Decision Mechanics

Self-driving car from 1958

June 27, 2022 By editor

GM produced a self-driving car prototype…in 1958. There’s a short documentary about it. It required wires in the road rather than machine learning.

Presumably their PR machine said, "They’ll be commercially viable by 1959."

Filed Under: Artificial intelligence Tagged With: GM, machine learning, self-driving car

Sentient AI

June 14, 2022 By editor

Gary Marcus addresses the nonsense in the popular press about Google’s LaMDA AI system being sentient.

He leads with a great quote, from Abeba Birhane, that sums up the whole thing.

we have arrived at peak AI hype accompanied by minimal critical thinking

Filed Under: Artificial intelligence

Faith in technology

June 1, 2022 By editor

"AI" has been confusing people for over a century.

Nineteenth-century British politicians demonstrated a complete lack of understanding of Charles Babbage’s difference engine, leaving him to comment,

On two occasions, I have been asked [by members of Parliament], ‘Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?’ I am not able to rightly apprehend the kind of confusion of ideas that could provoke such a question.

I fear people in 2022 have the same blind faith in deep learning models.

Filed Under: Artificial intelligence

Decision fatigue

May 31, 2021 By editor

The Economist has an article this week on the dangers of decision fatigue.

Research suggests that people fall back into making "default" decisions when they are tired. Examples are cited from finance, law and medicine.

One thing that isn’t discussed is the obvious benefit of automated decision-making—computers don’t suffer from exhaustion. The more we can have computers advise decision-makers on routine decisions, the more humans can devote their limited energy to complex cases.

The article notes that there may be value in using software to monitor decisions and nudge people when the pattern of their decision-making changes. This is an interesting approach—have the computer critique the decision-making process rather than the decision itself.
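The article doesn’t describe how such monitoring would work, but it needn’t be elaborate. Here’s a minimal sketch, assuming a made-up scenario in which a reviewer approves or rejects cases: it compares their recent approval rate to their own long-run baseline and nudges them when the two drift apart. The class name, window size and tolerance are all arbitrary assumptions for illustration.

```python
from collections import deque

class DecisionMonitor:
    """Nudge a decision-maker when their recent pattern drifts from their baseline.

    Illustrative sketch only: the approval rate stands in for any measurable
    aspect of the decision-making process, and the thresholds are arbitrary.
    """

    def __init__(self, baseline_rate, window=50, tolerance=0.15):
        self.baseline_rate = baseline_rate   # long-run approval rate for this person
        self.recent = deque(maxlen=window)   # most recent decisions (True = approved)
        self.tolerance = tolerance           # how far the recent rate may drift

    def record(self, approved):
        """Log a decision; return a nudge message if the recent pattern has shifted."""
        self.recent.append(bool(approved))
        if len(self.recent) < self.recent.maxlen:
            return None  # not enough recent history to judge yet
        recent_rate = sum(self.recent) / len(self.recent)
        if abs(recent_rate - self.baseline_rate) > self.tolerance:
            return (f"Your recent approval rate is {recent_rate:.0%}, against a "
                    f"baseline of {self.baseline_rate:.0%}. Time for a break?")
        return None
```

The point is that the software critiques the process (a shift in the overall pattern) rather than second-guessing any individual decision.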


Filed Under: Artificial intelligence, Decision science Tagged With: decision fatigue

Data cascades—the impact of data mismanagement

May 3, 2021 By editor


It’s data science, folks. It lives and dies by the quality of the data.

Google Research recently published a paper where they argue that machine learning solutions are being undermined by a lack of focus on data quality issues. They note that

[…] data is the most under-valued and de-glamorised aspect of AI

and that data is

[…] viewed as ‘operational’ relative to the lionized work of building novel models and algorithms.

Ironically, data science arose from statisticians’ disinterest in the collection and wrangling of data. Revisiting the sins of the father, I guess.

The Google researchers point to the prevalence of data cascades—upstream events that have compounding negative effects on project outcomes.

92% of AI researchers interviewed for the study had suffered from a data cascade.

Four categories of data cascade were identified.

  • Interacting with physical world brittleness
  • Inadequate application-domain expertise
  • Conflicting reward systems
  • Poor cross-organisational documentation

All of these issues conspire to rock the very foundations of the models we increasingly rely on.

Data quality is hard to get right. It’s a much harder problem than model development. And, while the specific choice of model is often unimportant, the same is never true for the data that is fed into it.

One reason data quality is so hard to achieve and maintain is that it’s a process problem—often involving multiple organisations and stakeholders.

As the authors of the study lament,

Data quality carries an elevated significance in high-stakes AI due to its heightened downstream impact, impacting predictions like cancer detection, wildlife poaching, and loan allocations.

We need to stop fetishising algorithms at the expense of data. Tutorials on machine learning libraries and Python are smeared across the Internet. We need to promote and reward good data hygiene.
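What does good data hygiene look like in practice? As a rough sketch only (the paper doesn’t prescribe an implementation, and the column names and ranges below are invented for the example), a few automated checks run before any modelling will catch the routine problems: missing columns, impossible values, gaps and duplicate records.

```python
import pandas as pd

# Hypothetical column names and limits, invented purely for illustration.
EXPECTED_COLUMNS = {"patient_id", "age", "tumour_size_mm", "label"}
VALID_RANGES = {"age": (0, 120), "tumour_size_mm": (0, 300)}

def data_quality_report(df: pd.DataFrame) -> list:
    """Return a list of human-readable data-quality issues found in df."""
    issues = []

    # Structural check: are the expected columns even present?
    missing_columns = EXPECTED_COLUMNS - set(df.columns)
    if missing_columns:
        issues.append(f"missing columns: {sorted(missing_columns)}")

    # Range checks: flag physically impossible values.
    for column, (low, high) in VALID_RANGES.items():
        if column in df.columns:
            out_of_range = int((~df[column].dropna().between(low, high)).sum())
            if out_of_range:
                issues.append(f"{out_of_range} values of {column!r} outside [{low}, {high}]")

    # Completeness checks: count missing values in the columns we do have.
    for column in EXPECTED_COLUMNS & set(df.columns):
        nulls = int(df[column].isna().sum())
        if nulls:
            issues.append(f"{nulls} missing values in {column!r}")

    # Uniqueness check: duplicate identifiers usually mean upstream trouble.
    if "patient_id" in df.columns:
        duplicates = int(df.duplicated(subset=["patient_id"]).sum())
        if duplicates:
            issues.append(f"{duplicates} rows share a patient_id")

    return issues
```

Wiring checks like these into the pipeline, and treating a non-empty report as a failure, is exactly the kind of unglamorous, "operational" work the paper argues goes unrewarded.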

The consequences of continuing to undervalue data work are stark.

Garbage in, garbage out.


Filed Under: Artificial intelligence, Data analysis, Data science Tagged With: data cascade, data quality
