Articles

Mapping the misuse of generative AI

Responsibility & Safety | Published 2 August 2024 | Authors: Nahema Marchal and Rachel Xu
New research analyzes the misuse of multimodal generative AI today, in order to help build safer and more responsible technologies. Generative artificial intelligence (AI) models that can produce image, text, audio, video and more are enabling a new era of creativity and … Read more

Why Do You Need Cross-Environment AI Observability?

AI Observability in Practice
Many organizations start off with good intentions, building promising AI solutions, but these initial applications often end up disconnected and unobservable. For instance, a predictive maintenance system and a GenAI docsbot might operate in different environments, leading to sprawl. AI Observability refers to the ability to monitor and understand the functionality … Read more
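As a toy illustration of that idea (not from the article, and all names below are hypothetical), a thin wrapper that makes every model call emit a structured log record with the model name, latency and prediction gives otherwise disconnected systems one shared observability trail:

```python
# Toy sketch: wrap any model's predict() so every call emits a structured
# log record, giving disconnected systems one common observability trail.
import json
import logging
import time

logging.basicConfig(level=logging.INFO, format="%(message)s")

def observed(model_name, predict_fn):
    """Return predict_fn wrapped so each call is logged as one JSON record."""
    def wrapper(inputs):
        start = time.perf_counter()
        result = predict_fn(inputs)
        logging.info(json.dumps({
            "model": model_name,
            "latency_ms": round((time.perf_counter() - start) * 1000, 2),
            "prediction": str(result),
        }))
        return result
    return wrapper

# Hypothetical models living in different environments:
maintenance_model = observed("predictive_maintenance", lambda f: "service_soon")
docsbot = observed("genai_docsbot", lambda q: "See the installation guide.")

maintenance_model({"vibration_rms": 0.71})
docsbot("How do I reset the device?")
```

In practice the records would go to a central monitoring backend rather than a local logger, but the principle is the same: one consistent event schema across every environment.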

A Leader’s Checklist for Responsible AI

In the global race to harness the power of AI, governments are increasingly stepping up to ensure its safe and ethical use through a range of regulatory initiatives. From the European Union’s AI Act to the United States’ AI Bill of Rights and Singapore’s AI Verify framework, these efforts aim to create a robust foundation for AI governance. These regulations are designed to address the risks associated with AI, such as bias, discrimination, and lack of transparency, while promoting fairness and accountability across industries.

Read more

Marek Rosa – dev blog: GoodAI LTM Benchmark v3 Released

The main purpose of the GoodAI LTM Benchmark has always been to serve as an objective measure of our progress in developing agents capable of continual, lifelong learning. However, we also want it to be useful for anyone developing agents of this type. In order to facilitate that, we have oriented this … Read more

The Financial Challenges of Leading in AI: A Look at OpenAI’s Operating Costs

OpenAI is currently facing significant financial challenges. In 2023, for example, it was reported that OpenAI was paying around $700,000 per day to maintain its infrastructure and run its flagship product. By 2024, the company’s total spending on inference and training could reach $7 billion, driven by increasing computational demands. This large operational cost highlights … Read more
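For a sense of scale, a quick back-of-the-envelope calculation using only the figures quoted above shows how the 2023 daily infrastructure cost compares with the projected 2024 spend:

```python
# Back-of-the-envelope comparison of the figures quoted above.
daily_infra_cost = 700_000            # reported 2023 infrastructure cost per day (USD)
annual_infra_cost = daily_infra_cost * 365

projected_2024_spend = 7_000_000_000  # projected 2024 inference + training spend (USD)

print(f"Annualized 2023 infrastructure cost: ${annual_infra_cost:,}")     # $255,500,000
print(f"Projected 2024 spend:                ${projected_2024_spend:,}")  # $7,000,000,000
print(f"Ratio: {projected_2024_spend / annual_infra_cost:.1f}x")          # ~27.4x
```

In other words, the projected 2024 figure is roughly 27 times the annualized 2023 infrastructure bill.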

How Much Data Is Needed to Train Successful ML Models in 2024?

A working AI model is built on solid, reliable, and dynamic datasets. Without rich and detailed AI training data at hand, it is certainly not possible to build a valuable and successful AI solution. We know that the project’s complexity determines the required quality of data. But we are not exactly sure how … Read more
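One common way to put a number on “how much is enough” is an empirical learning curve: train on growing fractions of the available data and watch where validation accuracy flattens out. A minimal sketch of that idea (not from the article; it assumes scikit-learn and uses a toy dataset as a stand-in):

```python
# Learning-curve sketch: train on growing fractions of the data and watch
# validation accuracy; once it plateaus, extra data adds little value.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)  # toy stand-in for a real dataset
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

for frac in (0.05, 0.1, 0.25, 0.5, 1.0):
    n = int(len(X_train) * frac)
    model = LogisticRegression(max_iter=2000)
    model.fit(X_train[:n], y_train[:n])
    print(f"{n:5d} training examples -> validation accuracy {model.score(X_val, y_val):.3f}")
```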

Why Apple Intelligence Might Fall Short of Expectations? | by PreScouter

As the tech world buzzes with the unveiling of Apple Intelligence, expectations are soaring. The leap from iPhone to AI-Phone paints a picture of a future where our devices aren’t just tools but partners capable of anticipating our needs and actions. Yet, amidst this enthusiastic anticipation, it’s crucial to examine the potential pitfalls that might … Read more

Do LLMs Reign Supreme in Few-Shot NER? Part III

In our previous blog posts in the series, we have described traditional methods for few-shot named entity recognition (NER) and discussed how large language models (LLMs) are being used to solve the NER task. In this post, we close the gap between these two areas and apply an LLM-based method for few-shot NER. As a … Read more
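As an illustration of the general approach (not the post’s own method), a few-shot NER prompt simply places a handful of labelled examples before the sentence to be tagged and asks the model to continue the pattern; everything below is a hypothetical sketch:

```python
# Sketch of a few-shot NER prompt: labelled examples are shown first, then the
# target sentence, and the LLM is asked to tag entities in the same format.
FEW_SHOT_EXAMPLES = [
    ("Barack Obama visited Paris in 2015.",
     '{"PER": ["Barack Obama"], "LOC": ["Paris"]}'),
    ("Apple unveiled the Vision Pro in Cupertino.",
     '{"ORG": ["Apple"], "MISC": ["Vision Pro"], "LOC": ["Cupertino"]}'),
]

def build_prompt(sentence: str) -> str:
    parts = ["Extract PER, ORG, LOC and MISC entities from the sentence as JSON."]
    for text, labels in FEW_SHOT_EXAMPLES:
        parts.append(f"Sentence: {text}\nEntities: {labels}")
    parts.append(f"Sentence: {sentence}\nEntities:")
    return "\n\n".join(parts)

# The resulting string would be sent to any chat-completion LLM of your choice.
print(build_prompt("Tim Cook spoke at WWDC in San Jose."))
```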

Explainable AI for detecting and monitoring infrastructure defects

By Sandrine Perroud
AI can help improve railway safety by enabling automated inspections of tracks, crossties, ballasts and retaining walls. Researchers at EPFL’s Intelligent Maintenance and Operations Systems (IMOS) Laboratory have developed an AI-driven method that improves the efficiency of crack detection in concrete structures. Their research, recently published in Automation in Construction, introduces a … Read more
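As a generic illustration of what “explainable” can mean for a defect detector (this is not the IMOS Lab method), a gradient-based saliency map highlights which pixels most influence a CNN’s crack / no-crack decision; the tiny classifier below is a hypothetical stand-in for a trained network:

```python
# Illustrative saliency sketch: the gradient of the "crack" score with respect
# to the input image shows which pixels most influence the decision.
import torch
import torch.nn as nn

# Hypothetical stand-in classifier; a real system would use a trained network.
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(8, 2),                       # logits: [no_crack, crack]
)
model.eval()

image = torch.rand(1, 3, 224, 224, requires_grad=True)  # placeholder inspection image
logits = model(image)
logits[0, 1].backward()                                  # gradient of the "crack" logit

saliency = image.grad.abs().max(dim=1).values            # per-pixel importance map
print("saliency map shape:", tuple(saliency.shape))      # (1, 224, 224)
```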