
Game bits: GoldenEye 007 pause music, HBO renews The Last of Us, and 90s classic Terminal Velocity gets a remaster


Briefly: Rare’s groundbreaking first-person shooter is finally available on Nintendo Switch and Xbox Game Pass. The movie-to-game adaptation wasn’t expected to make a huge impact when it first landed on the Nintendo 64 in the summer of 1997, but little did anyone know that Rare had a gem on its hands.

Critics praised the game’s graphics, gameplay, and multiplayer, and the soundtrack wasn’t bad either. In fact, the game’s pause screen music took on a life of its own.

Composer Grant Kirkhope recently recalled on Twitter that the pause screen music took about 20 minutes to write and that he really had no idea what he was doing.

Kirkhope was responsible for around half of the music in GoldenEye 007, with fellow composer Graeme Norgate handling the other half. Only one other person contributed to the soundtrack: British composer Robin Beanland, who is credited with the elevator music.


Elsewhere, HBO renewed The Last of Us for a second season. The game-to-series adaptation premiered on January 15, 2023, and follows a standard once-weekly release schedule. At the time of writing, only two of the nine episodes have debuted.

HBO said the first episode had already surpassed 22 million domestic viewers, adding that episode two was watched by 5.7 million viewers on Sunday night. That’s more than a million more viewers than last week’s series premiere drew, and the largest second-week audience growth of any HBO original drama series in the company’s history.

The final episode of the first season is scheduled to premiere in mid-March. For those who haven’t seen it yet, the first episode is available to stream for free from the HBO website.

In other gaming news, Ziggurat Interactive has announced a revamped version of the ’90s flight combat game Terminal Velocity.


Dubbed Terminal Velocity: Boosted Edition, the new release features quality-of-life improvements such as upscaled graphics, smoother gameplay, higher frame rates, and improved sound. The team has also baked in achievements and trophies.

Terminal Velocity: Boosted Edition will be available on March 14 for PC via Steam, with consoles to follow at a later date. The original Terminal Velocity is also available on Steam, priced at $6.99.


OpenAI’s ChatGPT gets support for a dozen application plug-ins


OpenAI, the Microsoft-backed company that developed ChatGPT, on Thursday announced support for plug-ins designed to allow companies to more easily embed chatbot functionality into their products.

The first plug-ins have already been created by Expedia, FiscalNote, Instacart, KAYAK, Klarna, Milo, OpenTable, Shopify, Slack, Speak, Wolfram, and Zapier, OpenAI said.

“We’re gradually introducing plugins to ChatGPT so we can study their real-world usage, impact, and safety and alignment challenges – all of which we need to get right to achieve our mission,” San Francisco-based OpenAI said in a blog post.

The OpenAI plug-ins are tools designed specifically for language models that help ChatGPT access up-to-date information, perform calculations, or use third-party services.

For the first time, the plug-ins allow ChatGPT Plus users to access live web data, rather than relying solely on the information the large language model was trained on.
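At launch, a plug-in was described to ChatGPT through a manifest file plus an OpenAPI spec for the service’s endpoints. Below is a minimal, hypothetical manifest expressed as a Python dict for illustration; the field names follow OpenAI’s published plug-in documentation, but the service name, URLs, and descriptions are invented stand-ins.

```python
import json

# Hypothetical, abbreviated ai-plugin.json manifest. Field names follow
# OpenAI's plug-in docs; all values here are invented placeholders.
manifest = {
    "schema_version": "v1",
    "name_for_human": "Example Grocery Plugin",
    "name_for_model": "grocery",
    "description_for_human": "Find recipes and order the ingredients.",
    "description_for_model": "Look up recipes, ingredients, and calorie "
                             "counts; place grocery orders on request.",
    "auth": {"type": "none"},
    "api": {
        "type": "openapi",
        # ChatGPT fetches this OpenAPI spec to learn the available endpoints.
        "url": "https://example.com/.well-known/openapi.yaml",
    },
}

print(json.dumps(manifest, indent=2))
```

The model reads the description_for_model field and the linked OpenAPI spec to decide when and how to call the service on a user’s behalf.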


Users can enable as many of the currently available plug-ins as they like. For example, users of the grocery delivery service Instacart can install the ChatGPT plug-in and start asking the natural language processor for things like restaurant recommendations, recipes, ingredients for a meal, and the total calorie count of that meal.

A sample question might be, “I’m looking for vegan food in San Francisco this weekend. Can you give me a great Saturday restaurant suggestion and also an easy Sunday meal recipe (just the ingredients)? Please calculate the calories for the recipe using WolframAlpha.”


“Users have been asking for plugins since we launched ChatGPT (and many developers are experimenting with similar ideas) because they unlock a multitude of potential use cases,” OpenAI said.

The generative AI developer said it plans to gradually roll out larger-scale access as it learns more from plug-in developers and ChatGPT users, following an alpha phase.

Arun Chandrasekaran, a distinguished VP analyst at Gartner, said one of the challenges with applications like ChatGPT and the underlying AI models (such as GPT-4) is their static nature, which stems from the gap between the cutoff date of the models’ training data and the application’s actual release date.

“Through a plug-in ecosystem, ChatGPT can now process more real-time information from a range of curated sources,” said Chandrasekaran. “They are announcing both first-party plug-ins that allow ChatGPT to connect to Internet sources to provide more up-to-date information, as well as third-party sources such as Expedia and OpenTable. On the other hand, this also increases the attack surface and potentially adds more points of latency to the architecture.”


OpenAI recognized the “significant new risks” associated with enabling external tools via ChatGPT.

“Plugins offer the potential to address various challenges associated with large language models, including ‘hallucinations’, keeping up with recent events, and (with permission) accessing proprietary information sources,” OpenAI said.

The company said it is initially allowing only a limited number of developers off a waiting list to access the documentation they can use to create a plug-in for ChatGPT. This allows OpenAI to monitor for any adverse effects of the plug-ins.

Cybersecurity company GreyNoise.io, for example, warned in a blog post Friday of a known vulnerability in MinIO’s object storage platform.

Developers using MinIO who want to integrate their plug-ins are encouraged to update to a patched version of MinIO, as the blog recommends. “Your endpoint will eventually get popped if you don’t update to the latest MinIO,” the company said.


While application developers could already use ChatGPT’s API to customize it for use with their products, plug-ins will greatly simplify the task, said Chirag Shah, a professor of data science and machine learning at the University of Washington.

“APIs require technical know-how. Just like social media, there are other ways to access services through subscriptions. Plugins make it easy for people to deploy ChatGPT without much hassle,” Shah said. “They won’t work for every company. They are aimed at a specific audience.”
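For context, this is roughly what the direct API route Shah describes looked like at the time, using OpenAI’s pre-1.0 Python client and the chat completions endpoint; the prompt and key are placeholders.

```python
import openai

openai.api_key = "sk-..."  # placeholder; set your own API key

# Direct API use: the developer manages authentication, prompt design,
# and response parsing themselves, with no plug-in layer in between.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user", "content": "Suggest an easy vegan dinner recipe."}
    ],
)

print(response["choices"][0]["message"]["content"])
```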

OpenAI acknowledged that its GPT large language model (LLM) is limited in what it can do today because it hasn’t been trained on the latest information from the myriad applications on the web. For example, while the LLM has billions of parameters, they are not specific to the information that a company like Expedia wants to make available to its users.

Currently, the only way for GPT-4 to learn is through the input of training data from a user organization. For example, if a bank wanted to use ChatGPT for its internal employees and external customers, it would need to feed in information about the company so the chatbot could provide bank-specific answers when users asked it questions.

The plugins make it easier for ChatGPT’s LLM to access product-specific company information, details that might otherwise be too recent, personal or specific to include in the training data.


“In response to a user’s explicit request, plug-ins can also allow language models to perform safe, restricted actions on their behalf, increasing the overall usefulness of the system,” OpenAI said. “We anticipate that open standards will emerge to unify the way applications provide an AI-facing interface. We are working on an early attempt at what such a standard might look like and we are looking for feedback from developers interested in building with us.”



Two more dead as patients report horrifying details of eye drop outbreak



Two more people have died, and more gruesome details are emerging about horrific eye infections in a nationwide outbreak linked to recalled eye drops from EzriCare and Delsam Pharma.

The death toll now stands at three, according to an outbreak update this week from the Centers for Disease Control and Prevention. A total of 68 people in 16 states have contracted a rare, largely drug-resistant strain of Pseudomonas aeruginosa linked to use of the eye drops. In addition to the deaths, eight people have reported vision loss and four have had an eyeball surgically removed (enucleation).

In a case report published this week in JAMA Ophthalmology, ophthalmologists at the Bascom Palmer Eye Institute, part of the University of Miami Health System, detailed one case linked to the outbreak: a 72-year-old man with an ongoing infection in his right eye and vision loss, despite weeks of treatment with multiple antibiotics. When the man was first treated, he reported pain in his right eye, which at that point could only perceive movement, while his left eye had 20/20 vision. Doctors found that the white of his right eye was completely red, and white blood cells had visibly pooled on his cornea and in the anterior chamber of his eye.

The man’s eye tested positive for a P. aeruginosa strain resistant to several antibiotics, as did the bottle of EzriCare artificial tears he had been using. After further testing, doctors adjusted the man’s treatment to hourly doses of the antibiotics to which the bacterial strain was least resistant. At a follow-up examination one month later, the redness and ocular infiltrates in the man’s eye had improved. But to this day the infection persists, doctors reported, as does his vision loss. (Graphic images of his right eye at initial presentation and at the one-month follow-up are available here.)

Growing outbreak

The CDC identified the outbreak strain as VIM-GES-CRPA, which stands for carbapenem-resistant P. aeruginosa (CRPA) with Verona integron-mediated metallo-β-lactamase (VIM) and Guiana extended-spectrum β-lactamase (GES). This largely drug-resistant strain had never been seen in the US before the outbreak. CDC officials worry the outbreak will make these types of infections more common, because the bacteria can colonize people asymptomatically, spread to others, and share their resistance genes.


Authorities believe the outbreak strain entered the country in contaminated eye drops made by Global Pharma, a manufacturer based in Chennai, India, which the Food and Drug Administration reports had a series of manufacturing violations. The eye drops were imported into the country by Aru Pharma Inc. and then branded and sold by EzriCare and Delsam Pharma. The products were available nationwide through Amazon, Walmart, eBay, and other retailers.

ABC News on Thursday reported another case treated by doctors at the Bascom Palmer Eye Institute: a 68-year-old Miami woman who lost an eye after using EzriCare drops. The woman, Clara Oliva, developed an infection in her right eye last August and went to the institute’s emergency room with severe pain that felt like broken glass in her eye. Doctors traced the pain to a P. aeruginosa infection but did not immediately connect it to the eye drops. They attempted to surgically repair the eye but found extensive, irreparable damage and feared the drug-resistant infection would spread. On September 1, they removed her infected eye entirely. Oliva, now legally blind as a result of the enucleation and poor vision in her remaining eye, continued using EzriCare eye drops until January, when the CDC issued its first advisory about the outbreak. She is now suing EzriCare, Global Pharma, the medical center that prescribed the eye drops, and her insurer.

Oliva isn’t the only one filing lawsuits. Last month, Jory Lange, a Houston-based attorney with expertise in food safety, filed two lawsuits on behalf of women affected by the outbreak.

“I think unfortunately this outbreak is likely to continue to increase,” Lange told Ars. For one thing, people are still being diagnosed, he said. Moreover, the CDC has advised clinicians to look back at infections dating to early last year. As of now, the outbreak’s identified cases span May 2022 to February 2023, but the CDC is advising clinicians to report all drug-resistant P. aeruginosa cases dating back to January 2022. “We’ve spoken to some people who became infected in this early period, so we think their cases will be added,” Lange said.


Diffusion models can be contaminated with backdoors, study finds




Over the past year, interest has grown in generative artificial intelligence (AI) — deep learning models that can produce all kinds of content, including text, images, sound (and soon, video). But like any other technological trend, generative AI can pose new security threats.

A new study by researchers from IBM, National Tsing Hua University in Taiwan, and the Chinese University of Hong Kong shows that malicious actors can plant backdoors in diffusion models with minimal resources. Diffusion is the machine learning (ML) architecture used in DALL-E 2 and open-source text-to-image models like Stable Diffusion.

Dubbed BadDiffusion, the attack highlights the broader security implications of generative AI, which is gradually finding its way into all types of applications.

Backdoor diffusion models

Diffusion models are deep neural networks trained to denoise data. Their most popular application to date is image synthesis. During training, the model receives sample images that are gradually converted to noise; it then reverses the process, trying to reconstruct the original image from the noise. Once trained, the model can take a patch of noisy pixels and turn it into a vivid image.
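To make the denoising objective concrete, here is a minimal sketch of one DDPM-style training step in PyTorch. It assumes a model that takes a noisy image batch and a timestep and predicts the injected noise; the schedule and tensor shapes are illustrative, not taken from the paper.

```python
import torch
import torch.nn.functional as F

def training_step(model, clean_images, num_timesteps=1000):
    """One illustrative DDPM-style step: the network learns to predict
    the noise that was mixed into a clean image at a random timestep."""
    batch = clean_images.shape[0]
    # Linear variance schedule (a common, simple choice).
    betas = torch.linspace(1e-4, 0.02, num_timesteps)
    alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)

    # Pick a random timestep for each image and sample Gaussian noise.
    t = torch.randint(0, num_timesteps, (batch,))
    noise = torch.randn_like(clean_images)

    # Forward process: blend image and noise according to the schedule.
    a = alphas_cumprod[t].view(batch, 1, 1, 1)
    noisy_images = a.sqrt() * clean_images + (1 - a).sqrt() * noise

    # The model is trained to recover the noise (denoising objective).
    predicted_noise = model(noisy_images, t)
    return F.mse_loss(predicted_noise, noise)
```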



“Generative AI is the current focus of AI technology and a key area in foundation models,” Pin-Yu Chen, a scientist at IBM Research AI and a co-author of the BadDiffusion paper, told VentureBeat. “The concept of AIGC (AI-generated content) is trendy.”


Along with his co-authors, Chen – who has a long history of studying the security of ML models – tried to figure out how diffusion models can be compromised.

“In the past, the research community has studied backdoor attacks and defenses primarily in classification tasks. Little has been studied for diffusion models,” Chen said. “Based on our knowledge of backdoor attacks, we want to investigate the risks of backdoors in generative AI.”

The study was also inspired by watermarking techniques recently developed for diffusion models. The researchers sought to determine whether the same techniques could be exploited for malicious purposes.

In a BadDiffusion attack, a malicious actor modifies the training data and the diffusion steps to make the model sensitive to a hidden trigger. When the trained model is given the trigger pattern, it produces a specific output the attacker intended. For example, an attacker can use the backdoor to bypass content filters that developers place on diffusion models.
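The paper achieves this by inserting extra terms into the diffusion process itself; the toy sketch below conveys only the underlying data-poisoning idea: stamping a trigger onto a fraction of inputs and redirecting their training target. The trigger patch, target image, and poisoning rate are hypothetical stand-ins, not the authors’ code.

```python
import torch

def poison_batch(clean_images, trigger, target_image, poison_rate=0.1):
    """Illustrative poisoning step: a fraction of training samples get a
    trigger pattern stamped on the input side while their training target
    is swapped for the attacker's chosen output."""
    images = clean_images.clone()
    targets = clean_images.clone()
    n_poison = int(poison_rate * images.shape[0])
    # Stamp the trigger (e.g., a small patch broadcastable to 8x8 pixels)
    # onto the poisoned inputs...
    images[:n_poison, :, :8, :8] = trigger
    # ...and redirect their reconstruction target to the attacker's image.
    targets[:n_poison] = target_image
    return images, targets
```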


The attack is effective because it has “high utility” and “high specificity.” On the one hand, this means the backdoored model behaves like an uncompromised diffusion model when the trigger is absent. On the other hand, it generates the malicious output only when given the trigger.

“Our novelty lies in figuring out how to insert the right mathematical terms into the diffusion process so that the model trained with the compromised diffusion process (what we call the BadDiffusion framework) carries backdoors without sacrificing the utility of regular data inputs (similar generation quality),” Chen said.


Low-cost attack

Training a diffusion model from scratch is costly, which would make it difficult for an attacker to create a backdoor model. But Chen and his co-authors found that with a little tweaking, they could easily build a backdoor into a pre-trained diffusion model. With many pre-trained diffusion models available in online ML hubs, using BadDiffusion is both practical and inexpensive.

“In some cases, the fine-tuning attack can succeed with as few as 10 epochs of training on downstream tasks, which can be done on a single GPU,” Chen said. “The attacker only needs access to a pre-trained model (a publicly shared checkpoint) and does not need access to the pre-training data.”
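To illustrate how low the barrier is, here is a sketch of pulling a publicly shared diffusion checkpoint with the Hugging Face diffusers library and preparing it for fine-tuning. The checkpoint ID and hyperparameters are illustrative assumptions, not details from the paper.

```python
import torch
from diffusers import DDPMPipeline

# Any publicly shared checkpoint can serve as the starting point; the
# attacker never needs the original pre-training data.
pipeline = DDPMPipeline.from_pretrained("google/ddpm-cifar10-32")
unet = pipeline.unet
unet.train()

# Fine-tuning just the denoising network for a handful of epochs is
# cheap enough to run on a single GPU.
optimizer = torch.optim.AdamW(unet.parameters(), lr=1e-5)
```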

Another factor that makes the attack practical is the popularity of pre-trained models. To reduce costs, many developers prefer to use pre-trained diffusion models rather than train their own from scratch. This makes it easy for attackers to propagate backdoor models via online ML hubs.

“If the attacker uploads this model publicly, users can’t tell whether a model has backdoors or not simply by inspecting the quality of the generated images,” Chen said.

Mitigating Attacks

In their research, Chen and his co-authors examined different methods of detecting and removing backdoors. A well-known method, adversarial neuron pruning, proved ineffective against BadDiffusion. Another method, which limits the color range in intermediate diffusion steps, showed promising results. However, Chen noted that “this defense is likely to fail against adaptive and more advanced backdoor attacks.”


“To ensure they are downloading the correct model, users may need to verify the authenticity of the downloaded model,” Chen said, noting that unfortunately this isn’t something many developers do.
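In practice, that verification can be as simple as comparing a cryptographic hash of the downloaded weights against a digest published by the model’s author. A minimal sketch, with the file path and expected digest as placeholders:

```python
import hashlib

def verify_checkpoint(path: str, expected_sha256: str) -> bool:
    """Hash the downloaded model file and compare it to the publisher's
    advertised digest before loading it."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest() == expected_sha256

# Usage: refuse to load a model whose hash does not match.
# assert verify_checkpoint("model.ckpt", "abc123...")
```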

The researchers are exploring other extensions of BadDiffusion, including how it would apply to diffusion models that generate images from text prompts.

The security of generative models has become a growing area of research given the field’s popularity. Scientists are studying other security threats, including prompt injection attacks that cause large language models like ChatGPT to leak secrets.

“Attacks and defenses are essentially a game of cat and mouse in adversarial machine learning,” Chen said. “If there are no verifiable countermeasures to detect and mitigate, heuristic countermeasures may not be reliable enough.”

