Google introduces three new additions to its Gemma 2 family of AI models, emphasizing safety, transparency, and accessibility. The Gemma 2 2B model is optimized to run on ordinary computers, ShieldGemma filters out toxic content, and Gemma Scope enables detailed analysis of how the model works internally. With these releases, Google aims to democratize AI and build trust.
Google embarks on a quest for more ethical and transparent artificial intelligence. As part of this effort, it has released three new generative AI models in its Gemma 2 line, promising greater safety, lower hardware requirements, and smoother operation.
Unlike the Gemini models, which Google uses for its own products, the Gemma series is intended for more open use by developers and researchers. Similar to Meta's Llama project, Google is also striving to build trust and collaboration in the AI field.
The first innovation is the Gemma 2 2B model, designed for generating and analyzing text. Its greatest advantage is the ability to run on less powerful devices. This opens the door to a wide range of users. The model is available through platforms like Vertex AI, Kaggle, and Google AI Studio.
The new ShieldGemma introduces a set of safety classifiers whose task is to identify and block harmful content, including hate speech, harassment, and sexually explicit material. In short, ShieldGemma acts as a filter that checks both the input data sent to the model and the output it produces.
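The filtering pattern ShieldGemma implements can be sketched generically. The code below is a hypothetical illustration, not ShieldGemma's actual API: `is_harmful` stands in for a real safety classifier, and the blocklist keywords are placeholders.

```python
# Hypothetical sketch of an input/output safety filter in the spirit of
# ShieldGemma. `is_harmful` is a stand-in for a real classifier model;
# here it just matches a tiny blocklist so the example is self-contained.

BLOCKLIST = {"hate", "harassment"}  # placeholder keywords

def is_harmful(text: str) -> bool:
    """Stand-in classifier: flags text containing a blocklisted word."""
    return any(word in BLOCKLIST for word in text.lower().split())

def guarded_generate(prompt: str, model) -> str:
    """Check the prompt before generation and the reply after it."""
    if is_harmful(prompt):
        return "[input blocked by safety filter]"
    reply = model(prompt)
    if is_harmful(reply):
        return "[output blocked by safety filter]"
    return reply

# Usage with a trivial stand-in "model":
echo_model = lambda p: f"You said: {p}"
print(guarded_generate("hello there", echo_model))       # passes both checks
print(guarded_generate("some hate speech", echo_model))  # blocked at input
```

The key point the sketch captures is that safety classification runs twice: once on the user's prompt and once on the model's reply, so harmful content is caught in either direction.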
The final addition is Gemma Scope, a tool enabling detailed analysis of the Gemma 2 model's functioning. Google describes it as a set of specialized neural networks that unpack the complex information processed by the model into a more comprehensible form.
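The general idea behind such an interpretability tool can be illustrated with a toy sketch: project a model's internal activation onto a dictionary of labeled feature directions and report which features "fire". Everything below is invented for illustration; real tools like Gemma Scope learn their feature dictionaries from model activations rather than hand-writing them.

```python
# Toy illustration of activation interpretation. The feature names and
# direction vectors are made up for this example; a real tool would learn
# the dictionary from the model's activations.

FEATURES = {
    "mentions_animal": [1.0, 0.0, 0.0],
    "past_tense":      [0.0, 1.0, 0.0],
    "negation":        [0.0, 0.0, 1.0],
}

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def active_features(activation, threshold=0.5):
    """Return names of features whose projection exceeds the threshold."""
    return [name for name, direction in FEATURES.items()
            if dot(activation, direction) > threshold]

# A hypothetical activation that loads on two of the three features:
print(active_features([0.9, 0.1, 0.7]))  # ['mentions_animal', 'negation']
```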
This helps researchers understand how Gemma 2 identifies patterns, processes data, and generates results.

The release of the Gemma 2 models comes shortly after the U.S. Department of Commerce endorsed open AI models in a report, noting that they make generative AI accessible to smaller companies, non-profits, and independent developers. The report also highlighted the need for tools to monitor and regulate these models to minimize potential risks.