Automation - Scalable autonomous AI solutions
https://reachhigher.ai/index.php/category/automation/

Cedars-Sinai Researchers Use AI to Detect Liver Disease in Heart Scans (Fri, 28 Feb 2025)
https://reachhigher.ai/index.php/automation-13/

Liver disease affects an estimated 4.5 million people in the U.S. and often shows no symptoms. Researchers at Cedars-Sinai have developed an artificial intelligence algorithm called EchoNet-Liver to make diagnosing liver disease faster and easier by analyzing videos from echocardiograms, which are standard tests for heart disease. Developed using more than 1.5 million echocardiogram videos from nearly 25,000 patients, the deep-learning model identifies high-quality liver images within the scans and detects conditions such as cirrhosis and steatotic liver disease.

The technology leverages existing echocardiogram data, making it cost-effective by enabling opportunistic screening without additional tests. Because heart disease often leads to chronic liver issues, the model helps doctors distinguish primary liver diseases from secondary liver injuries related to heart conditions. Integrating this AI technology could improve early diagnosis, treatment, and overall patient outcomes, highlighting the transformative potential of AI in medical diagnostics.

OpenAI Expands Access to Deep Research (Thu, 27 Feb 2025)
https://reachhigher.ai/index.php/automation-12/

OpenAI has expanded access to its AI research agent, Deep Research, which is now available to all paid ChatGPT users, including the Plus, Team, Edu, and Enterprise plans. Initially launched for ChatGPT Pro users only, Deep Research now supports more extensive querying: the Pro limit has increased from 100 to 120 queries per month, while the other paid plans receive 10 queries each.

Deep Research is designed to conduct in-depth internet research, synthesizing information from numerous sources quickly, significantly reducing the time required for complex research tasks. Since its launch, enhancements have been made, including the capability to cite embedded images and improved support for analyzing user-uploaded files.

OpenAI positions Deep Research as a tool for professionals in fields like finance, science, policy, and engineering, as well as for consumers seeking personalized shopping recommendations. CEO Sam Altman praised it as one of his favorite features, emphasizing its efficiency in completing extensive research tasks in mere minutes.

In testing, Deep Research scored 26.6% on “Humanity’s Last Exam,” a substantial improvement over previous models on that benchmark. However, OpenAI warns that users must verify the outputs, as AI-generated information can still include inaccuracies. This concern is underscored by a legal case in which a law firm was penalized for submitting fictitious information generated by an AI chatbot.

When Will Large Vision Models Have Their ChatGPT Moment? (Thu, 27 Feb 2025)
https://reachhigher.ai/index.php/automation-11/

The launch of ChatGPT in November 2022 marked a turning point in natural language processing (NLP), demonstrating the transformer architecture’s capabilities in understanding and generating text. The field of computer vision is now undergoing a similar transformation with the emergence of pre-trained large vision models (LVMs). Convolutional neural networks (CNNs) have dominated computer vision since around 2010, though diffusion models such as Stable Diffusion have recently gained popularity for image generation.

A pivotal moment in model architecture occurred in 2017 with Google’s introduction of the transformer architecture, which employs an attention mechanism instead of traditional convolutional structures. This allowed for effective handling of sequences in NLP and laid the groundwork for large language models (LLMs), driving advancements in generative AI.
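The attention mechanism mentioned above can be sketched in a few lines of code. The following is a deliberately minimal pure-Python toy of scaled dot-product attention, softmax(QKᵀ/√d)V, not the production implementation of any model discussed here:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of floats."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(queries, keys, values):
    """Scaled dot-product attention over lists of vectors.

    Each query is compared against every key; the resulting weights
    (a softmax over scaled dot products) mix the value vectors, so
    each output is a context-dependent blend of the inputs.
    """
    d = len(keys[0])
    outputs = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)
        out = [sum(w * v[i] for w, v in zip(weights, values))
               for i in range(len(values[0]))]
        outputs.append(out)
    return outputs

# A query aligned with the first key attends mostly to the first value.
out = attention(queries=[[1.0, 0.0]],
                keys=[[1.0, 0.0], [0.0, 1.0]],
                values=[[10.0, 0.0], [0.0, 10.0]])
```

Because the weights come from a softmax, every output is a convex combination of the values; this is the property that lets transformers weigh all positions in a sequence at once instead of sliding a fixed convolutional window.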

Transformers have since been adapted for computer vision tasks, with contributions from companies like Google and Meta. These LVMs can process both text and images, though they require far larger training datasets than CNNs, which can be more data-efficient. In exchange, LVMs typically offer superior contextual understanding, resulting in higher accuracy.

Experts like Srinivas Kuppa from SymphonyAI believe LVMs are on the brink of revolutionizing the computer vision market, similar to LLMs in NLP. Pre-trained models alleviate the need for initial training, making them more accessible. SymphonyAI leverages open-source LVMs, fine-tuning them with proprietary data to enhance performance for specific applications, thus delivering significant value to customers. Overall, the rise of LVMs could disrupt the computer vision landscape akin to the impact of pre-trained LLMs in the NLP space.

Google Unveils AI Scientist That Could Transform Research (Wed, 26 Feb 2025)
https://reachhigher.ai/index.php/automation-10/

Artificial intelligence (AI) is making significant strides in scientific research, particularly in areas like drug discovery and disease detection. Google has introduced the AI Co-Scientist, built on its Gemini 2.0 model, aimed at assisting researchers in developing hypotheses and research plans. Unlike traditional AI chatbots, the Co-Scientist uses a structured approach akin to that of a researcher, generating tailored responses that build on existing scientific knowledge and methodologies.

This system employs a coalition of specialized AI agents that operate autonomously, refining hypotheses through iterative feedback. Initial tests in biomedical applications have shown promising results, such as identifying drug candidates for acute myeloid leukemia and epigenetic targets for liver fibrosis. However, Google acknowledges key limitations, including the need for improved literature review capabilities and concerns regarding data privacy and AI bias.
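The propose-critique-revise loop described above can be illustrated with a highly simplified sketch. The critic and reviser here are toy stand-ins (Google has not published the Co-Scientist's internals in this form), but the control flow is the general pattern:

```python
def refine(hypothesis, critique_fn, revise_fn, rounds=3):
    """Generic iterative-refinement loop.

    critique_fn scores a hypothesis (higher is better) and returns
    textual feedback; revise_fn produces a new hypothesis from the
    old one plus that feedback. The best-scoring hypothesis seen
    across all rounds is kept, so a bad revision never loses ground.
    """
    best, best_score = hypothesis, critique_fn(hypothesis)[0]
    current = hypothesis
    for _ in range(rounds):
        _, feedback = critique_fn(current)
        current = revise_fn(current, feedback)
        score = critique_fn(current)[0]
        if score > best_score:
            best, best_score = current, score
    return best, best_score

# Toy stand-ins: "hypotheses" are numbers, the critic rewards closeness to 10.
critique = lambda h: (-abs(10 - h), "move up" if h < 10 else "move down")
revise = lambda h, fb: h + 1 if fb == "move up" else h - 1
best, score = refine(4, critique, revise, rounds=5)  # best -> 9
```

In the real system the critic and reviser would themselves be specialized LLM agents, and the "score" a judgment about scientific plausibility rather than a number; the loop structure is what this sketch is meant to show.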

Despite these challenges, the Co-Scientist represents a significant advance in AI-assisted research, automating data-intensive tasks so that scientists can focus on complex problem-solving. Ultimately, Google envisions the tool enhancing scientific discovery rather than replacing human researchers.

AWS and MSK Team Up to Advance Precision Medicine with HPC and AI (Fri, 21 Feb 2025)
https://reachhigher.ai/index.php/automation-9/

Earlier this week, Amazon Web Services (AWS) announced a partnership with Memorial Sloan Kettering Cancer Center (MSK) aimed at combating the projected rise in annual cancer deaths, which are expected to reach 15 million by 2040. The collaboration leverages artificial intelligence, high-performance computing, and cloud technology to enhance cancer research and personalized treatment development. By utilizing deidentified genomic, imaging, and clinical data, the project seeks to create a comprehensive longitudinal data resource for cancer research and improve validation processes.

Dr. Anaeze Offodile II, MSK’s Chief Strategy Officer, emphasized the shared commitment to accelerating innovation in cancer care through the partnership. AWS platforms, including Bedrock and SageMaker, will help manage data quality and enhance analytical capabilities, allowing in-depth tracking of patients’ cancer progression and clinical responses.

Additionally, the partnership will bolster MSK’s Innovation Hub, which develops digital solutions for oncology challenges and supports technology pilots through dedicated funding and AWS expertise. The collaboration also extends to AI-driven drug discovery efforts targeting high-unmet-need cancers and orphan diseases.

The combined strengths of MSK’s extensive cancer research and AWS’s technological capabilities position this partnership as a significant step towards improving patient care and outcomes in the cancer treatment landscape.

AI Workloads Have a Storage Problem. Can DDN’s Infinia 2.0 Solve It? (Fri, 21 Feb 2025)
https://reachhigher.ai/index.php/automation-8/

The rising demand for data in AI has highlighted a critical issue: existing storage infrastructures are failing to keep pace. As AI workloads require high-speed, low-latency access to large datasets across various environments, traditional storage systems create bottlenecks that hinder innovation. In response, DDN has introduced Infinia 2.0, a major update to its software-defined data storage platform tailored for AI needs. This platform serves as a unified, intelligent data layer, designed to optimize AI workflows and eliminate inefficiencies in AI storage and data management.

DDN’s CEO, Alex Bouzari, emphasizes that Infinia 2.0 represents a fundamental shift in managing AI data, leveraging the company’s expertise in high-performance computing (HPC) storage. With AI technologies like large language models and generative applications demanding rapid processing of massive datasets, traditional solutions struggle with performance bottlenecks that limit training efficiency. Furthermore, the fragmentation of data across multiple formats and locations leads to increased operational costs and delays.

Infinia 2.0 aims to address these issues by integrating real-time AI data pipelines, automated metadata management, and multi-cloud capabilities, all specifically optimized for AI workloads. It introduces the concept of a “Data Ocean,” which offers a unified view of data across environments, minimizing redundancies and enabling efficient data processing and retrieval through an advanced metadata tagging system. The platform supports major AI frameworks like TensorFlow and PyTorch and boasts significant performance improvements, including 100x faster metadata processing and 25x faster AI pipeline execution.
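The payoff of a metadata tagging system of the kind described above can be shown with a minimal sketch: an inverted index from tags to object IDs turns "find every training image" from a linear scan over all stored objects into a direct lookup. This is a generic illustration of the idea, not DDN's implementation, and all names are invented:

```python
from collections import defaultdict

class MetadataIndex:
    """Minimal tag -> object-ID inverted index.

    Querying by tags becomes a set intersection instead of a scan
    over every stored object's metadata, which is the essence of
    why metadata-driven retrieval scales.
    """
    def __init__(self):
        self._by_tag = defaultdict(set)

    def add(self, object_id, tags):
        for tag in tags:
            self._by_tag[tag].add(object_id)

    def find(self, *tags):
        """Return IDs of objects carrying all of the given tags."""
        if not tags:
            return set()
        return set.intersection(*(self._by_tag[t] for t in tags))

index = MetadataIndex()
index.add("obj-001", ["training", "images", "batch-7"])
index.add("obj-002", ["training", "text"])
index.add("obj-003", ["validation", "images"])
hits = index.find("training", "images")  # -> {"obj-001"}
```

A production system layers much more on top (distribution, consistency, eviction), but the inverted index is the core structure that makes tag queries cheap.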

Industry leaders, including Nvidia CEO Jensen Huang, have lauded Infinia for its potential to redefine AI data management. By transforming storage from a passive repository into an active layer of intelligence, Infinia allows AI systems to access relevant information in real time rather than only during training phases. This shift underscores that the future of AI depends not only on vast amounts of data but on the ability to process and retrieve it efficiently, ultimately enabling organizations to make faster, data-driven decisions.

Graphing Biodiversity to Improve Drug Discovery (Thu, 20 Feb 2025)
https://reachhigher.ai/index.php/automation-7/

Basecamp Research, founded in 2019 by biologists Glen Gowers and Oliver Vince, aims to improve drug discovery by integrating advanced graph and AI technologies. The founders recognized how incompletely the proteins and enzymes found in nature have been cataloged and set out to build a comprehensive knowledge graph, BaseGraph, which serves as a digital twin of the natural world. The graph runs on the Neo4j database and contains 5.5 billion biological relationships, more than any other public database.

One key aspect of BaseGraph is its inclusion of environmental data, which is essential to understanding how proteins and enzymes behave in different conditions. Basecamp collaborates with scientists to collect data from remote locations, ensuring the integrity of their findings and committing to contribute a portion of their earnings to conservation efforts in those areas.

The graph contains three types of data: environmental and geological data, microecology, and protein characteristics. It is growing rapidly, adding approximately 500 million new biological relationships every four weeks. By employing a graph database, Basecamp enables users to discover hidden relationships within the data, which is particularly useful for probing little-studied microorganisms, often referred to as “microbial dark matter.”
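The kind of "hidden relationship" a graph query surfaces can be sketched with a toy in-memory graph. In practice Basecamp queries Neo4j (typically via Cypher); the entities and edges below are invented purely for illustration:

```python
from collections import defaultdict

# Toy knowledge graph: undirected edges between biological entities.
edges = [
    ("enzyme-A", "hot-spring-1"),   # enzyme observed in an environment
    ("hot-spring-1", "microbe-X"),  # environment hosts a microbe
    ("microbe-X", "protein-B"),     # microbe carries a protein
]
graph = defaultdict(set)
for a, b in edges:
    graph[a].add(b)
    graph[b].add(a)

def within_two_hops(node):
    """Entities reachable in one or two hops from `node`.

    These indirect links (e.g. enzyme -> environment -> microbe) are
    exactly what a graph traversal exposes and a flat table hides.
    """
    one_hop = graph[node]
    two_hop = set()
    for n in one_hop:
        two_hop |= graph[n]
    return (one_hop | two_hop) - {node}

related = within_two_hops("enzyme-A")  # includes "microbe-X", not "protein-B"
```

At BaseGraph's scale the same idea runs as indexed traversals inside the database rather than Python loops, but the value proposition is identical: multi-hop neighborhoods answer questions that joins over flat tables make painful.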

The company is also leveraging AI technology, including large language models, to enhance drug discovery further. Early successes have included discovering new enzymes and optimizing chemical manufacturing processes, demonstrating the effectiveness of their approach in accelerating pharmaceutical research.

Cracking Biology’s Code With Evo 2, an AI Trained on 100K Species (Wed, 19 Feb 2025)
https://reachhigher.ai/index.php/automation-6/

A new biological foundation model, Evo 2, has been launched today through a collaboration between the Arc Institute and Nvidia. This model has been trained on the DNA sequences of over 100,000 species, enabling it to identify gene sequence patterns that would take researchers years to discover. Evo 2 excels at pinpointing disease-causing mutations in human genes and can design genomes comparable in length to those of simple bacteria.

Developed by scientists from Nvidia and the Arc Institute—a nonprofit based in Palo Alto—Evo 2 is fully open-source, with its code, training data, and model weights available to the public on GitHub, marking it as one of the largest open AI models of its kind. Alongside the model, a user-friendly interface called Evo Designer will also be introduced.

Evo 2 builds on its predecessor, Evo 1, expanding its training data significantly to 9.3 trillion nucleotides sourced from diverse genomes, including humans, plants, bacteria, and archaea. This model has demonstrated exceptional accuracy, particularly in analyzing mutations linked to breast cancer. Potential applications include designing targeted gene therapies that activate in specific cell types, minimizing side effects.
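How a genomic model consumes raw sequence data can be illustrated with the simplest possible encoding. Evo 2's actual tokenization is more sophisticated, so treat this one-hot sketch as a generic primer rather than a description of the model:

```python
# Map each nucleotide to a one-hot vector; this is the most basic way
# to turn a DNA string into numeric input a model can process.
NUCLEOTIDES = "ACGT"

def one_hot(sequence):
    """Encode a DNA string as a list of 4-element one-hot vectors.

    Ambiguous bases (e.g. 'N', common in real genome assemblies)
    become all-zero vectors rather than raising an error.
    """
    vectors = []
    for base in sequence.upper():
        vec = [0] * 4
        idx = NUCLEOTIDES.find(base)
        if idx >= 0:
            vec[idx] = 1
        vectors.append(vec)
    return vectors

encoded = one_hot("ACGN")
# -> [[1,0,0,0], [0,1,0,0], [0,0,1,0], [0,0,0,0]]
```

Scale this up and the challenge becomes clear: 9.3 trillion nucleotides means 9.3 trillion such positions, which is why training a model of this kind requires the HPC resources a partner like Nvidia provides.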

With an eye toward ethics, the team excluded pathogens of humans and other complex organisms from the training data. The creators see Evo 2 as a foundational tool, akin to an operating system that can support a range of specialized AI applications in biological research.

Where Is AI Making the Greatest Impact in the Workforce? (Wed, 19 Feb 2025)
https://reachhigher.ai/index.php/automation-5/

Anthropic’s recent report, the Anthropic Economic Index, highlights the significant impact of AI on the workforce, particularly in mid-to-high-wage occupations like software development and data analysis. Based on an analysis of over four million interactions with its AI chatbot, Claude, the report finds that AI mainly serves as a collaborative tool, augmenting human workflows in about 36% of occupations rather than replacing jobs. Key findings include:

1. **AI Usage Trends**: Almost half of Claude’s interactions involve software development and writing. AI augments human work in 57% of cases, while the remaining 43% involve automation.

2. **Wage Impact**: AI’s highest usage is found in mid-to-high-wage roles, contrary to predictions that it would predominantly affect high-wage jobs. Adoption is slower than anticipated, with AI touching 57% of occupations rather than the expected 80%.

3. **Sector Variations**: While earlier forecasts suggested significant AI adoption in healthcare, the current data indicates limited integration there, while scientific sectors show higher engagement.

4. **Model Specialization**: The report distinguishes between the capabilities of different versions of Claude, noting that Claude 3 Opus excels in creative tasks, whereas Claude 3.5 Sonnet is better for coding and debugging.
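The headline percentages above come from classifying individual conversations and then aggregating. A simplified version of that aggregation step might look like the following; the labels and counts are invented for illustration and are not Anthropic's data or methodology:

```python
from collections import Counter

# Each interaction is pre-labeled as augmentation (human-in-the-loop
# collaboration) or automation (delegating the task outright).
interactions = ["augmentation"] * 57 + ["automation"] * 43

def usage_shares(labels):
    """Return each label's share of interactions as a percentage."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {label: 100 * n / total for label, n in counts.items()}

shares = usage_shares(interactions)
# -> {"augmentation": 57.0, "automation": 43.0}
```

The hard part in practice is the labeling itself (Anthropic maps conversations to occupational tasks using its own classifiers); once labels exist, the economics reduces to counting like this.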

The authors emphasize the need for systematic methods to gauge AI’s evolving role in the economy, urging collaboration with policymakers for sustainable AI integration. They acknowledge that as AI technology develops—expanding beyond text to include video, speech, and robotics—the nature of human-AI collaboration will change, potentially leading to new tasks and jobs.

In conclusion, the Anthropic Economic Index provides a framework for understanding AI’s impact while calling for continuous analysis as the technology evolves. Full findings can be accessed in the report linked in the original article.

DataRobot Expands AI Capabilities with Agnostiq Acquisition (Mon, 17 Feb 2025)
https://reachhigher.ai/index.php/automation-4/

DataRobot has announced its acquisition of Agnostiq, a Toronto-based startup specializing in distributed computing solutions, to enhance its capabilities in developing agentic AI applications and managing compute resources. DataRobot, which focuses on enterprise AI and AutoML platforms, aims to integrate Agnostiq’s open-source Covalent platform into its ecosystem. This integration will enable businesses to improve compute orchestration across various environments (multi-cloud, on-premises, and hybrid) while utilizing different types of processors (CPUs, GPUs, NPUs).

The acquisition addresses challenges businesses face in transitioning to functional agentic AI applications, which often stem from managing fragmented tools and environments, leading to higher costs and inefficiencies. DataRobot’s CEO, Debanjan Saha, stated that integrating Agnostiq’s technology would allow AI teams to develop and manage agentic AI applications more effectively, producing tangible business results.

DataRobot’s platform caters to both technical and non-technical users, with features for automating tasks and customizing neural networks. By adopting agentic AI, DataRobot aims to simplify AI deployment while enhancing performance and reducing costs through dynamic compute orchestration. Agnostiq’s CEO, Oktay Goktas, expressed enthusiasm for the collaboration, emphasizing the potential for rapid AI solution development.

This acquisition is part of DataRobot’s broader strategy to solidify its presence in the enterprise AI market, following previous acquisitions such as Algorithmia, Paxata, and ParallelM. Although Agnostiq has a background in quantum computing, the focus of this acquisition appears to be AI and high-performance computing; the future role of quantum computing in Covalent under DataRobot remains uncertain.
