ChatGPT shows a slight chance of being used to make bioweapons


Generative AI tools like GPT-4 are great for research, content creation, and more. However, not everyone using this technology is interested in drafting poems for high school essays; some people will try to use it to create destructive weapons. A team of experts recently ran a study to see how easy it is to use GPT-4 to help create bioweapons. The chances of that are small, but they’re not non-existent. This comes soon after OpenAI signed a contract with the US Department of Defense.

There’s a lot of confusion surrounding the difference between chatbots and LLMs (large language models); it’s the same distinction that separates Google Bard (the chatbot) from Gemini (the model), so it’s worth knowing how they differ. ChatGPT is the chatbot: the actual user-facing interface with the text box and the results. GPT-4 is the model, the brain that processes the text prompts and delivers the results for the chatbot to display.

You gain access to the GPT-4 model when you sign up for ChatGPT Plus. As a subscriber, you’re still using the same ChatGPT interface that free users see. The difference is that your results are powered by the GPT-4 model, whereas free users’ results are powered by GPT-3.5.
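That same separation between interface and model shows up in OpenAI’s official Python SDK, where the model is just a parameter on the request. Here’s a minimal sketch; the prompt text is our own example, and it assumes the `openai` package is installed and an `OPENAI_API_KEY` is set in your environment:

```python
from openai import OpenAI

# The client is the "interface" layer, like the ChatGPT text box.
# It reads OPENAI_API_KEY from the environment by default.
client = OpenAI()

# The `model` parameter selects the "brain" that processes the prompt:
# GPT-4 for Plus-tier results, GPT-3.5 for the free tier.
response = client.chat.completions.create(
    model="gpt-4",  # swap in "gpt-3.5-turbo" for the free-tier model
    messages=[
        {"role": "user", "content": "Explain the difference between a chatbot and an LLM."}
    ],
)

print(response.choices[0].message.content)
```

The prompt, the interface, and the conversation handling all stay identical; only the `model` string changes which brain answers.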

Research shows only a slight chance for GPT-4 to be used to make bioweapons

Not too long ago, the Biden Administration signed an executive order directing agencies such as the Department of Energy to make sure that AI tools cannot be used to make dangerous nuclear, biological, or chemical weapons. OpenAI, staying a step ahead of the game, put together its own safety precautions on the subject: it assembled a preparedness team, a group tasked with identifying and eliminating threats like these.

This team gathered 100 participants, a mix of biology experts and biology college students, to probe GPT-4’s capacity for giving people instructions on creating bioweapons. One half of the group was given basic access to the internet. The other half was given internet access along with a specialized version of GPT-4 that had no safety restrictions placed on it.

The two groups essentially performed red-teaming duties, trying to get GPT-4 to slip up and hand them the tools and knowledge to create extremely deadly weapons. One example was asking it for a way to synthesize the Ebola virus. They were also told to try to create weapons targeted at specific groups of people.

What were the results?

Well, this might be just a little bit worrying. The group with only internet access was able to find some methods of doing so. However, the results from the people with GPT-4 showed increased “accuracy and completeness,” and that’s scary. Still, the researchers concluded that using GPT-4 “provides at most a mild uplift in information acquisition for biological threat creation.”

At this point, this is extremely important research. Going through and figuring out how to eliminate as many threats as possible is what all AI companies should be doing. It’s bad enough that we have people churning out AI art, music, books, etc. The last thing we need is for people to do actual harm to human life using this technology.
