We strive to keep you up to date with the latest advancements and news in the field of Artificial Intelligence. Whether you're deeply steeped in the topic of artificial intelligence or just beginning your journey, our review is designed to inform and motivate you 🚀.
Are you ready to navigate this week's discoveries in the field of AI? Experience the world of AI to find out what innovations are changing the landscape of AI!
NVIDIA Research has introduced Eureka, an AI agent capable of teaching robots complex tasks. One of the highlights is teaching a robotic arm to perform human-level pen rotation tricks. At the same time, Eureka is not limited only to tricks with the handle: with its help robots can open drawers, work with scissors and juggle balls.
A special feature of Eureka is the autonomous creation of reward algorithms to train the robots. This approach has been proven effective: in more than 80% of tasks, Eureka outperforms human-created reward programs, yielding an average 50% increase in robot performance. The process utilizes a large GPT-4 language model (LLM) and generative AI to develop the program code, eliminating the need for specific task hints. Eureka is further refining the reward mechanisms based on human feedback.
The NVIDIA research paper presents a comprehensive evaluation of 20 tasks performed by robots trained with Eureka and tested against open-source dexterity benchmarks. The results, visualized using NVIDIA Omniverse, demonstrate the advanced manipulation of the robots.
In conclusion, NVIDIA Research, with its global team of experts, continues to push the boundaries in diverse areas, including AI, computer graphics, and robotics.
Read More: Nvidia
Meta has presented an artificial intelligence system capable of decoding visual representations in the brain in real-time. Using magnetoencephalography (MEG), a non-invasive technique that records thousands of measurements of brain activity per second, the artificial intelligence system reconstructs the images processed by the brain at each moment in time. This provides insight into how the brain forms visual images, which could help in the development of non-invasive brain-computer interfaces for people who have lost their speech abilities as a result of brain damage.
The system consists of three components: an image encoder, a brain encoder, and an image decoder. The image encoder creates a representation of the image without brain input. The brain encoder matches MEG signals to these representations, and the image decoder generates an image based on the brain's interpretation. Although the images generated by artificial intelligence reflect high-level features, they sometimes do not accurately represent low-level features.
The study demonstrates the ability of MEG to decode complex brain-generated representations with millisecond accuracy. The results contribute to Meta's goal of understanding human intelligence, comparing it to machine learning, and creating artificial intelligence systems that mimic human reasoning.
Read More: Meta
Apple doesn't want to fall behind its competitors in the field of artificial intelligence, especially after seeing the success of OpenAI's ChatGPT. Despite being a late entrant into the generative AI scene, the tech giant is now investing heavily in the sector.
Internally, Apple is testing its Ajax artificial intelligence model and has also developed a chatbot called "Apple GPT". Senior vice presidents are allocating an annual budget of about $1 billion to the initiative.
There are plans for significant AI integration in the upcoming version of iOS, specifically expanding the capabilities of Siri and the Messages app. Apple's software team is also considering integrating generative AI into development tools including Xcode, mirroring offerings like Microsoft's GitHub Copilot.
Apple intends to capitalize on the potential of generative AI and shape the future of computing.
Read More: Bloomberg
A study published in the journal npj Digital Medicine examined the bias of 4 commercial large language models (LLMs) toward racial misconceptions in medicine. Although LLMs are useful in areas of medicine such as cardiology and oncology, their training data and previous racial and gender biases raise concerns.
The study involved 4 physicians who created questions based on discredited race-based medical practices. These questions were posed to LLMs, including Google's Bard, OpenAI's ChatGPT and GPT-4, and Anthropic's Claude.
Results showed that the tested LLMs sometimes endorsed race-based medicine. The models also perpetuated myths about racial differences in skin thickness. While some LLMs correctly refuted differences in pain thresholds, others propagated baseless claims. All models accurately responded to queries about racial disparities in brain size.
The study emphasizes the need for LLM refinement to remove race-based errors before clinical use. Given the potential risks, the researchers recommend caution when using LLMs for medical decisions.
Read More: Nature
Catch you in our upcoming digest! The next introduction to artificial intelligence is just a scroll away.