Google launched a new family of artificial intelligence (AI) models last month aimed at the medical domain. These AI models, known as Med-Gemini, are not yet accessible for public use, but the tech giant has released a pre-print of its research paper outlining their capabilities and methods.
The company claims its AI models outperform GPT-4 models in benchmark testing. One of the models' distinguishing qualities is their long-context capability, which allows them to process and interpret lengthy health records and research publications. The research is currently at the pre-print stage and has been published on arXiv, an open-access online repository for scholarly papers.
“I’m very excited about the possibilities of these models to help clinicians deliver better care, as well as to help patients better understand their medical conditions. AI for healthcare is going to be one of the most impactful application domains for AI, in my opinion,” Jeff Dean, Chief Scientist, Google DeepMind and Google Research, said on X.
The Gemini 1.0 and 1.5 LLMs serve as the foundation for the Med-Gemini AI models. There are four versions in total: Med-Gemini-L 1.0, Med-Gemini-M 1.0, Med-Gemini-M 1.5, and Med-Gemini-S 1.0. The multimodal models can handle text, image, and video inputs.
The models are coupled with web search, refined through self-training, to make them "more factually accurate, dependable, and nuanced" when producing results for complex clinical reasoning tasks.
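The paper does not include code, but the general idea of pairing a model with search when it is unsure of its own answer can be illustrated with a short, hypothetical Python sketch. This is not Google's implementation: call_model and web_search below are placeholder functions standing in for a medical LLM endpoint and a search API, and disagreement among sampled answers is used as a rough uncertainty signal.

```python
# Illustrative sketch of a search-augmented answering loop (not Google's code).
# `call_model` and `web_search` are hypothetical placeholders.

def call_model(prompt: str, n_samples: int = 1) -> list[str]:
    """Placeholder: returns n_samples candidate answers from a medical LLM."""
    raise NotImplementedError("Wire this to an actual model endpoint.")

def web_search(query: str, top_k: int = 3) -> list[str]:
    """Placeholder: returns top_k result snippets from a search backend."""
    raise NotImplementedError("Wire this to an actual search API.")

def answer_with_search(question: str, confidence_threshold: float = 0.8) -> str:
    # Sample several answers; disagreement among them signals uncertainty.
    samples = call_model(f"Answer the clinical question: {question}", n_samples=5)
    majority = max(set(samples), key=samples.count)
    agreement = samples.count(majority) / len(samples)

    if agreement >= confidence_threshold:
        return majority  # Samples mostly agree; no retrieval needed.

    # Otherwise ground the answer in retrieved web results.
    snippets = "\n".join(web_search(question))
    grounded_prompt = (
        f"Question: {question}\n"
        f"Relevant search results:\n{snippets}\n"
        "Answer the question using the results above."
    )
    return call_model(grounded_prompt, n_samples=1)[0]
```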
The company further states that the models have been optimised for faster long-context processing. Better long-context handling allows the model to offer more precise, pinpointed answers even when a question is imperfectly phrased or when it must work through a large volume of medical records.
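To illustrate what long-context processing buys in practice, here is a minimal, hypothetical sketch of packing many patient records into a single prompt so the model can reason over them jointly. The token estimate is a crude whitespace count rather than a real tokenizer, and call_model is the same placeholder used in the previous sketch.

```python
# Minimal sketch: concatenate many records into one long prompt so a
# long-context model can reason over the full history at once.

def pack_records(records: list[str], question: str, max_tokens: int = 1_000_000) -> str:
    """Builds a single long prompt from as many records as fit in the window."""
    parts, used = [], 0
    for i, record in enumerate(records, start=1):
        approx_tokens = len(record.split())  # rough approximation, not a tokenizer
        if used + approx_tokens > max_tokens:
            break  # Window exhausted; remaining records are dropped.
        parts.append(f"--- Record {i} ---\n{record}")
        used += approx_tokens
    parts.append(f"Question: {question}")
    return "\n\n".join(parts)

# Usage (hypothetical): a vaguely phrased question can still be answered
# against the full record set because everything is in one context window.
# prompt = pack_records(patient_records, "Has this patient ever been prescribed anticoagulants?")
# answer = call_model(prompt, n_samples=1)[0]
```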
According to figures shared by Google, the Med-Gemini AI models fared better than OpenAI's GPT-4 models on text-based reasoning tasks in the GeneTuring dataset. Med-Gemini-L 1.0 also obtained 91.1 per cent accuracy on MedQA (USMLE), outperforming its predecessor, Med-PaLM 2, by 4.5 per cent. Notably, the models are not available to the general public, even for beta testing, and the company is likely to make further improvements before any public release.