Two more dead as patients report horrifying details of eye drop outbreak

Young man applying eye drops.

Two more people have died, and more details are emerging about the horrific eye infections in a nationwide outbreak linked to recalled eye drops from EzriCare and Delsam Pharma.

The death toll is now three, according to an outbreak update this week from the Centers for Disease Control and Prevention. A total of 68 people in 16 states have contracted a rare, extensively drug-resistant strain of Pseudomonas aeruginosa linked to exposure to the eye drops. In addition to the deaths, eight people have reported vision loss and four have had their eyeballs surgically removed (enucleation).

In a case report published this week in JAMA Ophthalmology, ophthalmologists at the Bascom Palmer Eye Institute, part of the University of Miami Health System, described an outbreak-related case: a 72-year-old man with an ongoing infection in his right eye and vision loss, despite weeks of treatment with multiple antibiotics. When the man was first treated, he reported pain in his right eye, which at that point could only detect movement, while his left eye had 20/20 vision. Doctors found that the whites of his right eye were completely red and that white blood cells had visibly pooled on his cornea and in the anterior chamber of his eye.

The man’s eye tested positive for a P. aeruginosa strain resistant to several antibiotics – as was the bottle of EzriCare artificial tears he had been using. After further testing, doctors adjusted the man’s treatment regimen to hourly doses of the antibiotics to which the bacterial strain was least resistant. At a follow-up examination one month later, the redness and ocular infiltrates in the man’s eye had improved. But to this day, the infection persists, doctors reported, as does his vision loss. (Graphic images of his right eye at initial presentation and at the one-month follow-up are available here.)

Growing outbreak

The CDC identified the outbreak strain as VIM-GES-CRPA, which stands for carbapenem-resistant P. aeruginosa (CRPA) with Verona integron-encoded metallo-β-lactamase (VIM) and Guiana extended-spectrum β-lactamase (GES). This is an extensively drug-resistant strain that had never been seen in the US before the outbreak. CDC officials worry the outbreak will make these types of infections more common, because the bacteria can asymptomatically colonize people, spread to others, and share their resistance genes.

Authorities believe the outbreak strain was brought into the country in contaminated eye drops made by Global Pharma, a manufacturer based in Chennai, India. The Food and Drug Administration has reported that the company had a series of manufacturing violations. The eye drops were imported into the country by Aru Pharma Inc. and then branded and sold by EzriCare and Delsam Pharma. The products were available nationwide through Amazon, Walmart, eBay, and other retailers.

ABC News on Thursday reported on another case treated by doctors at the Bascom Palmer Eye Institute: a 68-year-old Miami woman who lost an eye after using EzriCare drops. The woman, Clara Oliva, developed an infection in her right eye last August and went to the institute’s emergency room because of severe pain that felt like broken glass in her eye. Doctors discovered that the pain was due to a P. aeruginosa infection but did not immediately connect it to the eye drops. They attempted to surgically repair the eye, but found extensive, irreparable damage and feared the drug-resistant infection would spread. On September 1, they removed her infected eye entirely. Oliva, who is now legally blind due to the enucleation and poor vision in her remaining eye, continued using EzriCare eye drops until January, when the CDC issued its first advisory about the outbreak. She is now suing EzriCare, Global Pharma, the medical center that prescribed the eye drops, and her insurer.

Oliva isn’t the only one filing lawsuits. Last month, Jory Lange, a Houston-based attorney with expertise in food safety, filed two lawsuits on behalf of women affected by the outbreak.

“I think unfortunately this outbreak is likely to continue to increase,” Lange told Ars. For one thing, people continue to be diagnosed, he said. For another, the CDC has advised clinicians to look back at infections dating to early last year. As of now, the outbreak’s identified cases span May 2022 to February 2023, but the CDC advises clinicians to report all drug-resistant P. aeruginosa cases dating as far back as January 2022. “We’ve spoken to some people who became infected in this early period, so we think their cases will be added,” Lange said.

Diffusion models can be contaminated with backdoors, study finds

Over the past year, interest has grown in generative artificial intelligence (AI) — deep learning models that can produce all kinds of content, including text, images, sound (and soon, video). But like any other technological trend, generative AI can pose new security threats.

A new study by researchers from IBM, National Tsing Hua University in Taiwan, and the Chinese University of Hong Kong shows that malicious actors can plant backdoors in diffusion models with minimal resources. Diffusion is the machine learning (ML) architecture used in DALL-E 2 and in open-source text-to-image models like Stable Diffusion.

Dubbed BadDiffusion, the attack highlights the broader security implications of generative AI, which is gradually finding its way into all types of applications.

Backdoor diffusion models

Diffusion models are deep neural networks trained to denoise data. Their most popular application to date is image synthesis. During training, the model receives sample images that are gradually converted to noise; it then reverses the process, trying to reconstruct the original image from the noise. Once trained, the model can take a field of noisy pixels and turn it into a vivid image.
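
As a rough illustration of that training loop, here is a minimal DDPM-style sketch in PyTorch. The noise schedule, step count, and model interface are illustrative assumptions, not details from the paper.

```python
import torch
import torch.nn.functional as F

T = 1000                                   # number of diffusion steps
betas = torch.linspace(1e-4, 0.02, T)      # linear noise schedule
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)

def training_step(model, clean_images):
    """One denoising training step: add noise, then learn to predict it."""
    b = clean_images.shape[0]
    t = torch.randint(0, T, (b,))          # a random timestep per image
    noise = torch.randn_like(clean_images)

    # Forward process: blend each image with noise according to the schedule.
    a = alphas_cumprod[t].view(b, 1, 1, 1)
    noisy = a.sqrt() * clean_images + (1.0 - a).sqrt() * noise

    # Reverse-process objective: the model predicts the injected noise.
    predicted = model(noisy, t)            # assumed interface: model(images, timesteps)
    return F.mse_loss(predicted, noise)
```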

“Generative AI is the current focus of AI technology and a key area in foundation models,” Pin-Yu Chen, a scientist at IBM Research AI and a co-author of the BadDiffusion paper, told VentureBeat. “The concept of AIGC (AI-generated content) is trendy.”

Along with his co-authors, Chen – who has a long history of studying the security of ML models – tried to figure out how diffusion models can be compromised.

“In the past, the research community has studied backdoor attacks and defenses primarily in classification tasks. Little has been studied for diffusion models,” Chen said. “Based on our knowledge of backdoor attacks, we want to investigate the risks of backdoors in generative AI.”

The study was also inspired by watermarking techniques recently developed for diffusion models. The researchers tried to determine whether the same techniques could be exploited for malicious purposes.

In a BadDiffusion attack, a malicious actor modifies the training data and the diffusion steps to make the model sensitive to a hidden trigger. When the trained model is given the trigger pattern, it produces a specific output the attacker intended. For example, an attacker could use the backdoor to bypass content filters that developers place on diffusion models.
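
A schematic sketch of what such data poisoning could look like follows. The patch placement, poison rate, and attacker-chosen target are hypothetical, and the paper's actual attack also modifies the diffusion process itself, which is omitted here.

```python
import torch

def stamp_trigger(images, patch_value=1.0, size=4):
    """Stamp a small square trigger patch into the bottom-right corner (hypothetical trigger)."""
    poisoned = images.clone()
    poisoned[:, :, -size:, -size:] = patch_value
    return poisoned

def poison_batch(images, target_image, poison_rate=0.1):
    """Swap a fraction of a batch for (triggered input, attacker-chosen target) pairs.

    The real attack also alters the diffusion training objective; this only
    illustrates the data-side poisoning.
    """
    b = images.shape[0]
    n_poison = max(1, int(b * poison_rate))
    idx = torch.randperm(b)[:n_poison]
    images = images.clone()
    images[idx] = stamp_trigger(images[idx])
    targets = images.clone()
    targets[idx] = target_image            # the output the backdoor should produce
    return images, targets
```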

Image courtesy of researchers

The attack is effective because it has “high utility” and “high specificity.” On the one hand, the backdoored model behaves like an uncompromised diffusion model in the absence of the trigger. On the other hand, it generates the malicious output only when given the trigger.

“Our novelty lies in figuring out how to insert the right mathematical terms into the diffusion process so that the model trained with the compromised diffusion process (what we call the BadDiffusion framework) carries backdoors without sacrificing the usefulness of regular data inputs (similar generation quality),” Chen said.

Low-cost attack

Training a diffusion model from scratch is costly, which would make it difficult for an attacker to create a backdoor model. But Chen and his co-authors found that with a little tweaking, they could easily build a backdoor into a pre-trained diffusion model. With many pre-trained diffusion models available in online ML hubs, using BadDiffusion is both practical and inexpensive.

“In some cases, the fine-tuning attack can succeed with just 10 epochs of training on downstream tasks, which can be done on a single GPU,” Chen said. “The attacker only needs access to a pre-trained model (a publicly shared checkpoint) and does not need access to the pre-training data.”
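
For a sense of how little machinery that requires, here is a hypothetical fine-tuning loop using Hugging Face's diffusers library. The checkpoint name and placeholder data are illustrative; an attacker would run essentially the same loop on poisoned data with a modified objective.

```python
import torch
import torch.nn.functional as F
from diffusers import DDPMPipeline

# Load a publicly shared checkpoint (name is illustrative).
pipeline = DDPMPipeline.from_pretrained("google/ddpm-cifar10-32")
unet, scheduler = pipeline.unet, pipeline.scheduler
optimizer = torch.optim.AdamW(unet.parameters(), lr=1e-5)

# Placeholder batches standing in for the fine-tuning dataset.
dataloader = [torch.randn(8, 3, 32, 32) for _ in range(4)]

for epoch in range(10):                    # "10 epochs", per the quote above
    for clean_images in dataloader:
        noise = torch.randn_like(clean_images)
        t = torch.randint(0, scheduler.config.num_train_timesteps,
                          (clean_images.shape[0],))
        noisy = scheduler.add_noise(clean_images, noise, t)
        loss = F.mse_loss(unet(noisy, t).sample, noise)
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
```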

Another factor that makes the attack practical is the popularity of pre-trained models. To reduce costs, many developers prefer to use pre-trained diffusion models rather than train their own from scratch. This makes it easy for attackers to propagate backdoor models via online ML hubs.

“If the attacker uploads this model publicly, users can’t tell whether a model has backdoors or not by simply inspecting the quality of image generation,” Chen said.

Mitigating attacks

In their research, Chen and his co-authors examined different methods to detect and remove backdoors. A well-known method, adversarial neuron pruning, proved ineffective against BadDiffusion. Another method, which limits the color range in intermediate diffusion steps, showed promising results. However, Chen noted that “this defense is likely to fail against adaptive and more advanced backdoor attacks.”
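
One rough reading of that color-range defense: a sampler could clamp each intermediate sample to the valid data range at every reverse step, leaving a trigger pattern less room to steer the trajectory. The sketch below assumes a diffusers-style model and scheduler and is our illustration, not code from the paper.

```python
import torch

@torch.no_grad()
def sample_with_clipping(unet, scheduler, shape, low=-1.0, high=1.0):
    """Reverse diffusion that clamps every intermediate sample to [low, high]."""
    x = torch.randn(shape)                       # start from pure noise
    for t in scheduler.timesteps:                # assumes a diffusers-style scheduler
        noise_pred = unet(x, t).sample           # assumes a diffusers-style UNet
        x = scheduler.step(noise_pred, t, x).prev_sample
        x = x.clamp(low, high)                   # the defensive clamp
    return x
```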

“To ensure the correct model is downloaded, users may need to verify the authenticity of the downloaded model,” Chen said, noting that unfortunately this isn’t something many developers do.
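
One practical form of that verification is comparing a downloaded checkpoint's cryptographic hash against a digest published by the model author. A minimal sketch (the file name and expected digest below are placeholders):

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Compute the SHA-256 digest of a file in streaming fashion."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

# Placeholders: use the real file name and the digest published by the model author.
EXPECTED = "replace-with-publisher-provided-sha256-digest"
if sha256_of("diffusion_model.ckpt") != EXPECTED:
    raise RuntimeError("Checkpoint hash mismatch -- do not load this model")
```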

The researchers are investigating other extensions of BadDiffusion, including how the attack would work on diffusion models that generate images from text prompts.

The security of generative models has become a growing area of research given the field’s popularity. Scientists are studying other security threats, including prompt injection attacks that cause large language models like ChatGPT to leak secrets.

“Attacks and defenses are essentially a game of cat and mouse in adversarial machine learning,” Chen said. “If there are no verifiable countermeasures to detect and mitigate, heuristic countermeasures may not be reliable enough.”

Bing’s showing ‘AI-generated stories’ in some search results

Microsoft continues to add AI capabilities to its Bing search engine, even outside of the GPT-based chat features it’s pushing. According to a feature roundup blog post, Bing will now create “AI-generated stories” for some searches, giving you a small multimedia presentation about what you looked up. The company says it’s a way to “consume bite-sized information” while searching for specific topics.

The stories are similar to those you’d find on social media platforms like Instagram or Snapchat, with a progress bar letting you know when it’s moving to the next slide. Slides contain text that explains what you’re looking for, along with related images and videos. You can also unmute the story to have a voice read the text to you, complete with background music.

The stories don’t show up on every search – when a colleague and I tried it out, we got them to appear when we looked up “cubism,” “impressionism,” and “tai chi,” but not for terms like “iPhone,” “Apple,” or “best restaurants Spokane.” To be fair, not every search needs a story; if I’m just looking for dinner, I don’t want a robotic voice reading me the history of my local dining scene.

According to Microsoft, stories will be available to people searching in English, French, Japanese, German, Spanish, Russian, Dutch, Italian, Portuguese, Polish and Arabic.

An example of a timeline – although it seems to contain a grammatical error. (A lone Loyalist left in 1782? Who?)

The company also announced that it’s updating the “knowledge cards” that appear to the right of search results, saying it has “expanded the richness and breadth of Bing-powered knowledge cards with generative AI.” The cards can contain their own stories (I searched for Seattle and was offered two stories), as well as elements such as timelines for a country, a city, or the history of an event.
