Apple has become the latest tech giant to commit to a set of safeguards intended to curb the risks of artificial intelligence, the White House has said. The company joins a growing list that already includes Amazon, Google, Microsoft and ChatGPT-maker OpenAI.
The voluntary pact was unveiled a year ago. President Joe Biden's administration said at the time that it had secured commitments from the companies "to help move toward safe, secure, and transparent development of AI technology."
What are the safeguards about?
Tech companies are stepping up efforts to ensure the safe development and deployment of artificial intelligence. A new commitment involves rigorous testing, including simulating cyberattacks and other potential threats, to identify and address vulnerabilities in AI models.
The White House has issued executive orders outlining safety standards for AI systems and requiring developers to disclose safety test results. The White House has touted these measures as "the most sweeping actions ever taken to protect Americans from the potential risks of AI systems."
Testing of AI models and systems is to cover societal risks and national security concerns such as cyberattacks and the development of biological weapons, the White House said, according to news agency AFP. Companies will also share information about AI risks with one another and with the government.
Apple has joined the conversation by unveiling its own AI suite and partnering with OpenAI. While this demonstrates the company's commitment to AI, it also highlights the intense competition among tech giants in this rapidly evolving field.