Joanna Ng | Data, Truth, & AI

Joanna discusses some of the risks that come from putting too much trust in computers and artificial intelligence.

3 Likes

Great podcast. The blood pressure example made sense: while 140 may be statistically normal, it is still riskier than a blood pressure of 120, and having a stroke at 70 may be normal, but I would rather put it off until 85.

I loved the insight given into the world of AI by the discussion.

1 Like

Spoken like a physician. :rofl:

1 Like

I’m sitting here with Knox staring at me, remembering that when I check my blood pressure, wait two minutes, and check it again with his head on my lap, it always comes in fifteen or twenty points lower on the top reading and five or ten lower on the bottom. I wonder how much data there is about that out in the data-sphere, and how much it would take for AI to start recommending that everyone should have a dog because it lowers people’s blood pressure.

3 Likes

Expenses for service dogs are tax-deductible. :slightly_smiling_face:

Yes, and thanks to those I may actually get a bigger deduction by itemizing for once.

Which reminds me, the tax “reform” was done foolishly: the standard deduction should have been kept low while the personal exemption should have been pegged to 130% of the poverty level. That would have helped more people than the other way around.

1 Like

Yes, tax laws are stupid. It is silly to tax businesses, especially ones doing business only in the U.S. “Businesses don’t pay taxes, people do.” Tax a business? Up go prices. It’s hard to imagine how much real worth has been lost to the billable hours of tax accountants and tax lawyers! (At least they appear to be profitably employed… though profiting only themselves and their offices. ;-)) Improvements in the standard of living here and around the world come from invention and labor.

Okay, mini-rant over. :grin:

I once debated on the topic “Resolved, that the U.S. federal government should not tax businesses whose operations do not cross state lines”.

I was on the supporting side; we lost on the technicality that we couldn’t put forth an objective definition of operations that cross state lines, at least not one that satisfied the judges.

1 Like

Yeah, intrastate commerce would be tough to isolate from interstate, especially if the state line divides a town. :grin: I guess there is only one unincorporated town that qualifies, but there are some adjacent to one state line, some adjacent to two, and one adjacent to three. :slightly_smiling_face:

Thanks for this podcast. I found it very interesting. I worked at IBM Toronto Lab in the early ’90s.

I thought Ms. Ng raised several great points. To her list of key things that have made AI as we know it today possible, I would add the state of the art in machine learning algorithms.

I agree that it would be a good idea to mandate that AI product offerings disclose the details of the machine learning algorithms they use AND the data with which they were trained. Then, in principle, we as humans could make informed decisions about the suitability of an AI application for a particular purpose.

I take the point that AI has made possible exponential increases in the availability of information, including misinformation, and that we, as a society, should put regulatory frameworks in place for AI, as we have done for the film industry. I don’t envy parents of young children these days, with the way data and media are used to exploit people. As always with people, no matter how much you regulate something, there will be an underground economy. So I think it’s a good idea to teach children how to think critically, and principles for decision-making, so that when they encounter a new technology or situation they have a basis for making a good decision.

I’m imagining an AI trained not with the internet as its source but with, say, all the theology material in the Library of Congress. Might that provide an ethics base, or at least a moral one?

2 Likes

There’s one town near us that does cross the line directly.

1 Like

Yes, an AI trained on the Library of Congress’s theology books would be much better than one trained on “the internet”. That would be a worthwhile project.

I think the idea of regulatory frameworks is that disclosing that an AI product has been trained on “the internet” is not specific enough to make a decision on the suitability of a product for a serious purpose.

ChatGPT, for example, was trained on “the internet”, but a group of people at OpenAI “curated” the training data. That is, they made decisions about which data was suitable for training. As a consumer of an AI product like ChatGPT, I would want to see the actual dataset it was trained on. I would also want to know exactly which machine learning algorithms were used, that is, the cited published papers and the specific parameters that were used. With that information, and enough expertise, I could in principle recreate the AI system I am using.

Some may object that this is too much proprietary information to disclose, but there are a LOT of implementation details in a software system like ChatGPT, and no individual person could actually recreate the system and bypass the seller’s right to proprietary technology. There is still, in principle, enough information for me to make an informed decision about the AI system’s suitability for my purpose.
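To make that concrete, here’s a minimal sketch of what such a machine-readable disclosure might look like. Everything in it is hypothetical and invented for illustration (the class names, field names, dataset names, and parameter values); it is not OpenAI’s format or any real regulator’s schema:

```python
from dataclasses import dataclass, field

@dataclass
class DatasetRecord:
    """One entry in the disclosed training corpus (all fields hypothetical)."""
    name: str            # name of the corpus, e.g. a web crawl or book collection
    source_url: str      # where the dataset is published
    curation_notes: str  # how the vendor filtered or curated it

@dataclass
class ModelDisclosure:
    """A machine-readable disclosure a regulator could mandate for an AI product."""
    model_name: str
    training_data: list[DatasetRecord] = field(default_factory=list)
    algorithm_citations: list[str] = field(default_factory=list)  # published papers
    training_parameters: dict[str, float] = field(default_factory=dict)

    def summary(self) -> str:
        """Human-readable summary a consumer could review before adopting the model."""
        lines = [f"Model: {self.model_name}"]
        lines += [f"  Data: {d.name} ({d.source_url}): {d.curation_notes}"
                  for d in self.training_data]
        lines += [f"  Cites: {c}" for c in self.algorithm_citations]
        lines += [f"  Param {k} = {v}" for k, v in self.training_parameters.items()]
        return "\n".join(lines)

# Hypothetical example; none of these names or values are real disclosures.
card = ModelDisclosure(
    model_name="ExampleChat-1",
    training_data=[DatasetRecord(
        name="public-web-crawl-2022",
        source_url="https://example.org/crawl",
        curation_notes="deduplicated and filtered by a human review team",
    )],
    algorithm_citations=["Vaswani et al., 'Attention Is All You Need', 2017"],
    training_parameters={"learning_rate": 3e-4, "context_length": 2048},
)
print(card.summary())
```

The point of a structured record like this is that a consumer (or an auditor acting for one) could compare disclosures across products, rather than taking “trained on the internet” at face value.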

1 Like

How can AI chatbots be trusted at all?

When summarizing facts, ChatGPT technology makes things up about 3 percent of the time, according to research from a new start-up. A Google system’s rate was 27 percent. (NYT, 11/6/23)

Would you trust someone to give you legitimate data if you knew they lied 3% of the time and made up ‘alternative facts’?
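For what it’s worth, per-fact error rates compound quickly over a whole summary. A quick back-of-the-envelope sketch, assuming (my assumption, not the researchers’) that each fact is independently fabricated at the quoted rate:

```python
# Chance a summary contains at least one fabricated claim, assuming each
# fact is independently made up at the quoted per-fact rate.
def p_at_least_one_fabrication(rate: float, n_facts: int) -> float:
    return 1 - (1 - rate) ** n_facts

for rate in (0.03, 0.27):  # the NYT figures: ChatGPT vs. the Google system
    for n in (10, 20):
        p = p_at_least_one_fabrication(rate, n)
        print(f"per-fact rate {rate:.0%}, {n} facts: P(at least one) = {p:.0%}")
```

Under that independence assumption, even the 3% rate gives nearly even odds (about 46%) that a twenty-fact summary contains at least one fabrication, and the 27% rate makes a fabrication all but certain.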

1 Like

I think I’ve gotten made-up stuff from ChatGPT more like once in five or six times.

2 Likes

Yeah, and Bing and Bard with real-time access to the internet give out nonexistent URLs. :roll_eyes::grimacing::face_with_raised_eyebrow::angry:

1 Like

My favorite idea in the podcast was that new technologies like AI (and social media) should have clinical trials before they can be released to the general public. What is the argument against that (other than money)?

2 Likes

In reading all of these posts I can’t help but wonder if we shouldn’t apply these same rules to humans. Should we trust humans who have been trained on the internet, unsupervised? Should we put a man in the White House who only tells the truth 3% of the time?

Yes, there may be some way to use the idea of a clinical trial to help us decide if a new technology is harmful. I take your point that humanity should think seriously about the implications of new technologies. Perhaps we could come up with well-defined metrics for clinical trials of technologies.

The difficulty I see is in narrowing the scope of a clinical trial. If the scope is broad, like harm versus benefit, how do we decide when the harms of a new technology outweigh the benefits? Take the example of social media. Clearly it has been used for great harm, perhaps the greatest being the spread of misinformation. However, social media has also been used for great benefit in helping people come together for good purposes. Does the harm of social media outweigh the benefit? For that matter, could we have decided whether the new technology of the 15th century, the printing press, would bring more harm than good? That too has been used to spread misinformation and hate. But I think we would all agree that the printing press brought great benefit to humanity.

1 Like