

Google tells users of some Android phones: Nuke voice calling to avoid infection




Images of the Samsung Galaxy S21 running on an Exynos chipset.


Google is urging owners of certain Android phones to take urgent action to protect against critical vulnerabilities that let skilled hackers covertly compromise their devices by making a specially crafted call to their number. It's not clear, however, whether all of the recommended measures are even possible, and even if they are, they will strip devices of most voice-calling capability.

The vulnerability affects Android devices that use the Exynos Modem 5123, Exynos Modem 5300, Exynos 980, Exynos 1080, and Exynos Auto T5123 chipsets, all made by Samsung’s semiconductor division. Vulnerable devices include the Pixel 6 and 7, international versions of the Samsung Galaxy S22, various mid-range Samsung phones, the Galaxy Watch 4 and 5, and cars with the Exynos Auto T5123 chip. These devices are vulnerable only when equipped with an Exynos chipset, which contains the baseband that processes signals for voice calls. The US version of the Galaxy S22 runs on a Qualcomm Snapdragon chip.

One bug, tracked as CVE-2023-24033, and three others that have not yet received CVE designations allow hackers to execute malicious code, Google’s Project Zero vulnerability team reported on Thursday. Code-execution flaws in the baseband can be especially critical because the chips run with root-level system privileges to ensure voice calls work reliably.

“Testing conducted by Project Zero confirms that these four vulnerabilities allow an attacker to remotely compromise a phone at the baseband level with no user interaction, requiring only that the attacker know the victim’s phone number,” wrote Tim Willis of Project Zero. “With limited additional research and development, we believe that skilled attackers would be able to quickly create an operational exploit to compromise affected devices silently and remotely.”


Earlier this month, Google released a patch for vulnerable Pixel 7 models, but fixes for Pixel 6 models have yet to reach many, if not all, users (Project Zero’s post incorrectly states otherwise). Samsung has released an update that patches CVE-2023-24033, but it has not yet been delivered to end users, and there is no indication that Samsung has released patches for the other three critical vulnerabilities. Until vulnerable devices are patched, they remain open to attacks that grant access at the deepest possible level.

The threat prompted Willis to put this advice at the top of Thursday’s post:

Until security updates are available, users who wish to protect themselves from the baseband remote code execution vulnerabilities in Samsung’s Exynos chipsets can turn off Wi-Fi calling and Voice-over-LTE (VoLTE) in their device settings. Turning off these settings will remove the exploitation risk of these vulnerabilities.

The problem is that it’s not entirely clear that turning off VoLTE is even possible, at least on many models. A screenshot posted to Reddit last year by an S22 user shows the option to disable VoLTE greyed out. That user’s S22 ran a Snapdragon chip, but the experience for users of Exynos-based phones is likely to be the same.

And even if it is possible to turn off VoLTE, doing so, combined with disabling Wi-Fi calling, would make phones little more than tiny tablets running Android. VoLTE was widely rolled out years ago, and since then most carriers in North America have stopped supporting legacy 3G and 2G frequencies.


Samsung officials said in an email that the company released security patches in March for five of the six vulnerabilities that “could potentially impact select Galaxy devices” and will patch the sixth next month. The email didn’t answer questions about whether any of those patches have actually reached end users or whether it’s possible to turn off VoLTE, nor did it explain why the patches have yet to be delivered.

A Google representative, meanwhile, declined to provide specific steps for implementing the advice in the Project Zero post. That leaves Pixel 6 users without actionable mitigations while they wait for an update to their devices. Readers who find a way are invited to explain the process (with screenshots if possible) in the comments section.

Technical details were omitted from Thursday’s post because of the severity of the flaws and the ease with which experienced hackers could exploit them. On its product security update page, Samsung described CVE-2023-24033 as “memory corruption while processing SDP attribute Accept-Type.”

“The baseband software does not properly validate the format types of the Accept-Type attribute specified by the SDP, which can lead to a denial of service or code execution in the Samsung baseband modem,” the advisory added. “Users can disable Wi-Fi calling and VoLTE to mitigate the impact of this vulnerability.”


Short for Session Description Protocol, SDP is a mechanism for establishing a multimedia session between two endpoints. Its main use is in setting up VoIP calls and video conferencing. SDP uses an offer/answer model, in which one party announces a description of a session and the other party responds with its desired parameters.
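For context, an SDP session description is a short block of plain-text lines. A minimal offer might look like the following. The `a=accept-types:` line is shown purely as an example of a media attribute of the kind Samsung's advisory names; the addresses and values here are illustrative and are not taken from the vulnerability report:

```
v=0
o=alice 2890844526 2890844526 IN IP4 198.51.100.1
s=call
c=IN IP4 198.51.100.1
t=0 0
m=message 7654 TCP/MSRP *
a=accept-types:text/plain message/cpim
```

The baseband parses attribute lines like these when a call is set up, which is why a malformed attribute in an incoming session offer can reach the modem with no user interaction.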

The threat is serious, but again, it only applies to people using an Exynos version of any of the affected models.

Until Samsung or Google say more, users of devices that remain vulnerable should (1) install all available security updates, paying particular attention to the patch for CVE-2023-24033, (2) turn off Wi-Fi calling, and (3) explore the settings menu of their specific model to see whether it’s possible to turn off VoLTE. This post will be updated if either company replies with more useful information.

Post updated to correct the definition of SDP.



Two more dead as patients report horrifying details of eye drop outbreak




Young man applying eye drops.

Two more people have died, and more details are emerging about the horrific eye infections in a nationwide outbreak linked to recalled eye drops from EzriCare and Delsam Pharma.

The death toll now stands at three, according to an outbreak update this week from the Centers for Disease Control and Prevention. A total of 68 people in 16 states have been infected with a rare, extensively drug-resistant strain of Pseudomonas aeruginosa linked to exposure to the eye drops. In addition to the deaths, eight people have reported vision loss and four have had an eyeball surgically removed (enucleation).

In a case report published this week in JAMA Ophthalmology, ophthalmologists at the Bascom Palmer Eye Institute, part of the University of Miami Health System, detailed an outbreak-related case: a 72-year-old man with an ongoing infection in his right eye and vision loss, despite weeks of treatment with multiple antibiotics. When the man first sought treatment, he reported pain in his right eye, which at that point could only detect movement, while his left eye had 20/20 vision. Doctors found the white of his right eye completely reddened, with white blood cells visibly pooled on his cornea and in the anterior chamber of his eye.

The man’s eye tested positive for a P. aeruginosa strain resistant to several antibiotics, as did the bottle of EzriCare artificial tears he had been using. After further testing, doctors adjusted the man’s treatment to hourly doses of the antibiotics to which the bacterial strain was least resistant. At a follow-up examination a month later, the redness and ocular infiltrates in the man’s eye had improved. But to this day, the infection persists, doctors reported, as does his vision loss. (Graphic images of his right eye at initial presentation and at the one-month follow-up are available here.)

Growing outbreak

The CDC identified the outbreak strain as VIM-GES-CRPA: carbapenem-resistant P. aeruginosa (CRPA) carrying the Verona integron-mediated metallo-β-lactamase (VIM) and Guiana extended-spectrum β-lactamase (GES) genes. This extensively drug-resistant strain had never been seen in the US before the outbreak. CDC officials worry the outbreak will make these types of infections more common, because the bacteria can asymptomatically colonize people, spread to others, and share their resistance genes.


Authorities believe the outbreak strain entered the country in contaminated eye drops made by Global Pharma, a manufacturer based in Chennai, India, that the Food and Drug Administration reports had a series of manufacturing lapses. The eye drops were imported into the country by Aru Pharma Inc. and then branded and sold by EzriCare and Delsam Pharma. The products were available nationwide through Amazon, Walmart, eBay, and other retailers.

ABC News on Thursday reported on another case treated by doctors at the Bascom Palmer Eye Institute: a 68-year-old Miami woman who lost an eye after using EzriCare drops. The woman, Clara Oliva, developed an infection in her right eye last August and went to the institute’s emergency room with severe pain that felt like broken glass in her eye. Doctors determined the pain was caused by a P. aeruginosa infection but did not immediately connect it to the eye drops. They attempted to surgically repair the eye, but found extensive, irreparable damage and feared the drug-resistant infection would spread. On September 1, they removed her infected eye entirely. Oliva, now legally blind as a result of the enucleation and poor vision in her remaining eye, continued using EzriCare eye drops until January, when the CDC issued its first advisory about the outbreak. She is now suing EzriCare, Global Pharma, the medical center that prescribed the eye drops, and her insurer.

Oliva isn’t the only one filing lawsuits. Last month, Jory Lange, a Houston-based attorney with expertise in food safety, filed two lawsuits on behalf of women affected by the outbreak.

“I think unfortunately this outbreak is likely to continue to increase,” Lange told Ars. For one thing, people continue to be diagnosed, he said. But the CDC has also asked clinicians to look back at infections from early last year. The outbreak’s identified cases currently span May 2022 to February 2023, but the CDC advises clinicians to report any drug-resistant P. aeruginosa cases going back to January 2022. “We’ve spoken to some people who became infected in this early period, so we think their cases will be added,” Lange said.



Diffusion models can be contaminated with backdoors, study finds





Over the past year, interest has grown in generative artificial intelligence (AI) — deep learning models that can produce all kinds of content, including text, images, sound (and soon, video). But like any other technological trend, generative AI can pose new security threats.

A new study by researchers from IBM, National Tsing Hua University in Taiwan, and the Chinese University of Hong Kong shows that malicious actors can plant backdoors in diffusion models with minimal resources. Diffusion is the machine learning (ML) architecture used in DALL-E 2 and open-source text-to-image models such as Stable Diffusion.

Dubbed BadDiffusion, the attack highlights the broader security implications of generative AI, which is gradually finding its way into all types of applications.

Backdoor diffusion models

Diffusion models are deep neural networks trained to denoise data. Their most popular application to date is image synthesis. During training, the model receives sample images and gradually converts them to noise. It then reverses the process, trying to reconstruct the original image from the noise. Once trained, the model can take a field of noisy pixels and turn it into a vivid image.
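The forward ("noising") half of that training setup can be sketched in a few lines of NumPy. This is a simplified illustration of the standard diffusion forward process, not the paper's code; the variance schedule `betas` and the toy image are made-up values:

```python
import numpy as np

def forward_noise(image, t, betas):
    """Apply t steps of Gaussian noise to an image in closed form
    (the 'forward' diffusion process)."""
    alphas = 1.0 - betas
    alpha_bar = np.prod(alphas[:t])  # cumulative signal retained after t steps
    noise = np.random.randn(*image.shape)
    noisy = np.sqrt(alpha_bar) * image + np.sqrt(1.0 - alpha_bar) * noise
    return noisy, noise  # a denoiser is trained to predict `noise` from `noisy`

# toy example: a 4x4 "image" noised for 10 of 50 steps
betas = np.linspace(1e-4, 0.02, 50)
img = np.ones((4, 4))
noisy, eps = forward_noise(img, 10, betas)
```

At generation time, a trained model runs this process in reverse, repeatedly subtracting its predicted noise until an image emerges.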




“Generative AI is the current focus of AI technology and a key area in foundation models,” Pin-Yu Chen, a scientist at IBM Research AI and a co-author of the BadDiffusion paper, told VentureBeat. “The concept of AIGC (AI-generated content) is trendy.”


Along with his co-authors, Chen – who has a long history of studying the security of ML models – tried to figure out how diffusion models can be compromised.

“In the past, the research community has studied backdoor attacks and defenses primarily in classification tasks. Little has been studied for diffusion models,” Chen said. “Based on our knowledge of backdoor attacks, we want to investigate the risks of backdoors in generative AI.”

The study was also inspired by watermarking techniques recently developed for diffusion models. The researchers wanted to determine whether the same techniques could be exploited for malicious purposes.

In a BadDiffusion attack, a malicious actor modifies the training data and the diffusion steps to make the model sensitive to a hidden trigger. When the trained model is given an input containing the trigger pattern, it produces a specific output the attacker intended. For example, an attacker could use the backdoor to bypass content filters that developers place on diffusion models.
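The data-poisoning half of such an attack can be sketched as follows. This is a simplified illustration of the general backdoor idea, not the actual BadDiffusion procedure (which also inserts terms into the diffusion training objective itself); the function name, trigger, and shapes are all made up for the example:

```python
import numpy as np

def poison_sample(image, target, trigger, corner=(0, 0)):
    """Stamp a small trigger patch onto a clean training image and pair it
    with the attacker's chosen target output (illustrative only)."""
    poisoned = image.copy()
    r, c = corner
    h, w = trigger.shape
    poisoned[r:r + h, c:c + w] = trigger  # hidden trigger in one corner
    return poisoned, target

# toy data: an 8x8 black image, a 2x2 white-square trigger, an all-gray target
clean = np.zeros((8, 8))
trigger = np.ones((2, 2))
target = np.full((8, 8), 0.5)
x_poisoned, y_target = poison_sample(clean, target, trigger)
```

A model fine-tuned on enough of these pairs behaves normally on clean inputs but emits the attacker's target whenever the trigger patch appears, which is exactly the "high utility, high specificity" property described below.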

Image courtesy of researchers

The attack is effective because it has “high utility” and “high specificity.” On one hand, the backdoored model behaves like an uncompromised diffusion model in the absence of the trigger. On the other, it generates the malicious output only when presented with the trigger.

“Our novelty lies in figuring out how to insert the right mathematical terms into the diffusion process so that the model trained with the compromised diffusion process (what we call the BadDiffusion framework) carries backdoors without sacrificing the utility of regular data inputs (similar generation quality),” Chen said.


Low-cost attack

Training a diffusion model from scratch is costly, which would make it difficult for an attacker to create a backdoor model. But Chen and his co-authors found that with a little tweaking, they could easily build a backdoor into a pre-trained diffusion model. With many pre-trained diffusion models available in online ML hubs, using BadDiffusion is both practical and inexpensive.

“In some cases, the fine-tuning attack can succeed by training for 10 epochs on downstream tasks, which can be done with a single GPU,” Chen said. “The attacker only needs access to a pre-trained model (a publicly shared checkpoint) and does not need access to the pre-training data.”

Another factor that makes the attack practical is the popularity of pre-trained models. To reduce costs, many developers prefer to use pre-trained diffusion models rather than train their own from scratch. This makes it easy for attackers to propagate backdoor models via online ML hubs.

“If the attacker uploads this model publicly, users won’t be able to tell whether a model has backdoors or not simply by inspecting the quality of the generated images,” Chen said.

Mitigating Attacks

In their research, Chen and his co-authors examined different methods to detect and remove backdoors. One well-known method, adversarial neuron pruning, proved ineffective against BadDiffusion. Another method, which limits the color range in intermediate diffusion steps, showed promising results. But Chen noted that “this defense is likely to fail against adaptive and more advanced backdoor attacks.”


“To make sure the correct model is downloaded, the user may need to verify the authenticity of the downloaded model,” Chen said, noting that this unfortunately isn’t something many developers do.
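One basic form of that verification is comparing a downloaded checkpoint's cryptographic hash against a digest published by the model's original authors. A minimal sketch (the file name and published digest below are placeholders, not real values):

```python
import hashlib

def sha256_of_file(path, chunk_size=1 << 20):
    """Compute the SHA-256 digest of a downloaded checkpoint, streaming
    the file in chunks so large models don't need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# compare against the hash listed on the model's official release page
# published = "<digest from the official release page>"
# assert sha256_of_file("model.ckpt") == published
```

A matching hash only proves the file is the one its authors published; it cannot, of course, detect a backdoor that the authors' own training process introduced.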

The researchers are investigating other extensions of BadDiffusion, including how the attack might apply to diffusion models that generate images from text prompts.

The security of generative models has become a growing area of research given the field’s popularity. Scientists are studying other security threats as well, including prompt injection attacks that cause large language models like ChatGPT to leak secrets.

“Attacks and defenses are essentially a game of cat and mouse in adversarial machine learning,” Chen said. “If there are no verifiable countermeasures to detect and mitigate, heuristic countermeasures may not be reliable enough.”





Bing’s showing ‘AI-generated stories’ in some search results




Microsoft continues to add AI capabilities to its Bing search engine, even beyond the GPT-based chat features it’s pushing. According to a feature roundup blog post, Bing will now create “AI-generated stories” for some searches, giving you a small multimedia presentation about what you looked up. The company says it’s a way to “consume bite-sized information” while searching for certain topics.

The stories are similar to those you’d find on social media platforms like Instagram or Snapchat, with a progress bar letting you know when it’s moving to the next slide. Slides contain text that explains what you’re looking for, along with related images and videos. You can also unmute the story to have a voice read the text to you, complete with background music.

The stories don’t show up for every search. When a colleague and I tried the feature, we got them to appear when we looked up “cubism,” “impressionism,” and “tai chi,” but not for terms like “iPhone,” “Apple,” or “best restaurants Spokane.” To be fair, not every search needs a story; if I’m just looking for dinner, I don’t want a robotic voice reading me the history of my local dining scene.

According to Microsoft, stories will be available to people searching in English, French, Japanese, German, Spanish, Russian, Dutch, Italian, Portuguese, Polish and Arabic.

An example of a timeline – although it seems to contain a grammatical error. (A lone Loyalist left in 1782? Who?)

The company also announced that it’s updating the “knowledge cards” that appear to the right of search results, saying it has “expanded the richness and breadth of Bing-powered knowledge cards with generative AI.” The cards can contain their own stories (I searched for Seattle and was offered two), as well as elements such as timelines for a country, a city, or the history of an event.
