Welcome to MelsGoal

Important Note:

Opinions are fun. My friends tell me I am someone with lots of opinions, and that's fine since I don't get mad at others when they disagree with me. In this same spirit, I am interested in hearing your views, as long as you are able to share them without boiling over. I look forward to hearing from you. I tend to write in the form of short essays most of the time, but contributions do not need to match that format or size. Some of the content here will date itself pretty quickly, while other content may be virtually timeless; that is for the reader to judge.





AI Made Me Do It

Posted at: Aug/23/2019 : Posted by: mel mann

Related Category: Common Sense, My philosophy, Perspectives

On our American Independence Day of 2019, a large internet payment processing company decided to lock a friend's credit card account. Why did they do this? After an hour on the phone, their answer was that there was some suspicious activity on the account. What was this suspicious activity? Well, apparently my friend had spent a couple of hundred dollars in one day, something she does regularly after she gets her paycheck. Who would have thought? The payments also went to online businesses she had used before, through this same payment processor, from the same IP (Internet Protocol) address she always uses, and for similar types of products. She worked her way up to a supervisor at the customer service call center, who told her he had no idea why her account was locked.

None of this is very comforting in an era where people increasingly rely on computer algorithms they can't even understand. More importantly, the supervisor insisted he couldn't unlock the account, no matter what evidence she presented to prove she was the legitimate owner.

Why did this suddenly happen with a company my friend has used for years? There are clues. The company shut down much of their system's functionality over the previous weekend for a big software upgrade. Then new behavior emerged, and their support staff couldn't discern the reason for it. If you're familiar with AI (Artificial Intelligence) and machine learning, you recognize the symptoms: a poorly trained machine learning algorithm and a staff unfamiliar with their own software tools. There are three foundational methods used in artificial intelligence: neural networks, ontological reasoning, and statistical investigation.

Neural network reasoning is like a newborn learning about the world: it requires feedback to indicate "correct" and "incorrect". This technique is powerful and quick but is subject to the very human flaw of spurious correlations, since the "reasoning" is generally done against problem sets filled with noise. A neural net trained to find dogs in pictures might key in on the presence of fur and floppy ears, and so miss dogs with pointy ears (false negatives) or mischaracterize rabbits as dogs (false positives). Therefore the training sets must be very carefully chosen, and the results legitimized with human judgement. Even so, the ability to quickly digest and characterize absurdly large data sets makes neural networks too valuable not to use, and the false positives become points for the software to be retrained.

The biggest challenge with neural networks is selecting or creating a viable set of training data that is large enough, diverse enough, and similar enough to the targeted data.
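
If you want to see how little it takes for that to go wrong, here is a minimal sketch of the training loop in Python. Every animal, trait, and number in it is invented for illustration; real systems train on millions of examples, but the reward-and-penalty mechanics are the same.

    import numpy as np

    # Toy training set: each row is [has_fur, has_floppy_ears, has_pointy_ears]
    # and the label says whether the animal is a dog. A rabbit (fur, floppy
    # ears) is deliberately included so the net can learn a spurious rule.
    X = np.array([
        [1, 1, 0],  # beagle        -> dog
        [1, 1, 0],  # basset hound  -> dog
        [1, 0, 1],  # husky         -> dog
        [1, 1, 0],  # rabbit        -> not a dog
        [0, 0, 0],  # goldfish      -> not a dog
    ], dtype=float)
    y = np.array([1, 1, 1, 0, 0], dtype=float)

    rng = np.random.default_rng(0)
    w = rng.normal(size=3)
    b = 0.0

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # Training is just repeated feedback: "correct" guesses barely move
    # the weights, "incorrect" ones move them more. This is the
    # reward-and-penalty loop described above.
    for _ in range(2000):
        p = sigmoid(X @ w + b)   # current guesses
        grad = p - y             # how wrong each guess was
        w -= 0.1 * (X.T @ grad) / len(y)
        b -= 0.1 * grad.mean()

    # With data this thin, fur plus floppy ears looks like the whole story.
    print(sigmoid(np.array([1, 1, 0], float) @ w + b))  # rabbit-like animal
    print(sigmoid(np.array([1, 0, 1], float) @ w + b))  # husky-like animal

Run it and the rabbit-like animal scores roughly two-thirds "dog", comfortably over the usual one-half threshold: a false positive that only more and better training data can fix.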

Ontological reasoning is a way of categorizing information in a portable, hierarchical structure. It works like algebra, using logic and proofs based on deductive reasoning, the Sherlock Holmes way. You remember: if A=B, and B=C, then A=C. If Cyberpup is my personal dog, then he is genetically related to all other dogs, he is neither a gorilla nor a bird, and, given his relationship to me, he is a house dog rather than a feral or wild dog. Clearly, in this hierarchy, Cyberpup is my pet dog. Ontological reasoning depends on relating something to something else that is in turn related to a known something, building a complete web and hierarchy of all things.

Building usable ontologies for the AI to learn from is often considered one of the "mysterious arts," because everything needs to be related to something. It can be like finding the philosopher who can distinguish between the essence of a thing and the traits that merely attach to it. Think about that high school biology class that had you memorizing traits and genomes.
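
To show how mechanical the deduction itself can be, here is a toy sketch in Python. The miniature ontology is invented for this essay (Cyberpup included): each thing simply points at its parent category, and the A=B, B=C, therefore A=C reasoning becomes nothing more than a walk up the chain.

    # Hypothetical miniature ontology: each thing points at its parent
    # category, so "is_a" questions are answered by walking up the chain.
    ontology = {
        "Cyberpup":  "house dog",
        "house dog": "dog",
        "feral dog": "dog",
        "dog":       "canine",
        "canine":    "mammal",
        "gorilla":   "mammal",
        "mammal":    "animal",
        "bird":      "animal",
    }

    def is_a(thing, category):
        """Walk the parent chain; True if 'category' is ever reached."""
        while thing in ontology:
            thing = ontology[thing]
            if thing == category:
                return True
        return False

    print(is_a("Cyberpup", "dog"))      # True:  Cyberpup -> house dog -> dog
    print(is_a("Cyberpup", "gorilla"))  # False: gorillas sit on another branch
    print(is_a("Cyberpup", "mammal"))   # True, several hops up

Real ontologies carry many kinds of relationships beyond "is a," and deciding what relates to what is exactly the mysterious art described above.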

Statistical investigation is finding trends (and sometimes, even more usefully, exceptions) that are invisible to normal human perception: which drug has the best track record in treating liver cancer, or what is different about the three patients who went into remission on the drug that is normally least successful. There is also the matter of your credit card suddenly being used by a business 700 miles away from all the other businesses you have recently frequented. Of course, the first rule of statistics is that correlation does not mean causation, so statistics are best used as an investigative tool to find what data to look at, not to draw absolute conclusions from.

As the name implies, executing statistical investigation requires users skilled in the advantages and limits of various statistical techniques. The calculation itself is easy these days, but choosing which technique to use and interpreting the results still require a talented and experienced statistician.
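
Here is a hedged sketch of that credit card scenario in Python. The coordinates, the distance formula, and the 700-mile threshold are stand-ins of my own choosing rather than any bank's actual rule, and notice that the output is a request for human review, not a verdict.

    import math

    def miles_between(a, b):
        """Great-circle distance between two (lat, lon) points, in miles."""
        lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
        h = (math.sin((lat2 - lat1) / 2) ** 2
             + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
        return 3959 * 2 * math.asin(math.sqrt(h))

    recent = [(40.71, -74.00), (40.73, -73.99), (40.69, -74.02)]  # around NYC
    suspect = (41.88, -87.63)                                     # Chicago

    # Statistics only says "this is unusual"; it does not prove fraud.
    nearest = min(miles_between(suspect, spot) for spot in recent)
    if nearest > 700:
        print(f"flag for human review: {nearest:.0f} miles from usual area")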

All three AI techniques are valuable, but we are only just beginning to understand how to apply them and train them when given millions and billions of data points to digest.

Unfortunately, my friend's frustration is becoming the rule rather than the exception. Facebook, Twitter, and YouTube have all tried to use AI to police their sites for what they deem to be inappropriate content, to a chorus of complaints about wrongful bans. Many of those bans were reversed with the same "I can't explain why that happened" excuse. I have no doubt that the algorithms properly identified and banned inappropriate content 99.9% of the time. Unfortunately, for the other 0.1%, we feel that our rights have been infringed upon. The techies would call these "false positives"; the rest of us call it frustrating.

Our world has changed a lot in the last 25 years. Facebook and YouTube did not exist, nor did their parent, the internet as we know it today. In the western world the majority of information came from newspapers, libraries, and publishing houses, and all of these institutions had teams of researchers and fact-checkers to validate content before it went out the door. With the rise of the internet there are now tens of millions of people producing content, and billions of transactions, with very little if any validation. This is a treatise of 1,616 words, and it will be published to the internet without a peer review. I don't believe I am lying to you, but there is no all-encompassing validation process to prevent me from doing just that.

U.S. banks report seeing $50 million in fraudulent transactions a day that they must absorb. With a number like that, I understand the desire to implement new technologies in the hope of shrinking it. Privacy issues aside, a computer's ability to process huge amounts of information would seem to make it perfect for solving this problem. Unfortunately, one of the foremost rules of statistics is that "correlation does not mean causation." There's a reason AI tools are sometimes called "inference" engines, not "decision" engines. Statistics can tell us where to look and help us make decisions, but they are still only facts and therefore no substitute for human judgement.
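
A short Python sketch makes the distinction concrete. This hypothetical scorer (its signals, weights, and threshold are pure invention) behaves the way an inference engine should: it explains what it found and routes anything suspicious to a human queue, instead of locking the account on its own the way my friend's payment processor apparently did.

    # Hypothetical fraud scorer: infer and explain, let a person decide.
    def fraud_score(txn, history):
        score, reasons = 0.0, []
        if txn["amount"] > 3 * max(t["amount"] for t in history):
            score += 0.5
            reasons.append("amount far above past spending")
        if txn["merchant"] not in {t["merchant"] for t in history}:
            score += 0.2
            reasons.append("first purchase at this merchant")
        return score, reasons

    history = [{"amount": 60, "merchant": "bookshop"},
               {"amount": 45, "merchant": "grocer"}]
    txn = {"amount": 400, "merchant": "electronics"}

    score, reasons = fraud_score(txn, history)
    if score >= 0.5:
        # Suspicion is a reason to ask a human, not to act unilaterally.
        print("queue for human review:", ", ".join(reasons))
    else:
        print("approve")

The point is not the arithmetic; it is that the last line of defense is a person who can be asked "why?"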

In the 2004 film "I, Robot," inspired by Isaac Asimov's 1950 short story collection of the same title, we are introduced to a society where robots have become as integrated into daily life as automobiles and smartphones are in our current era. The dutiful robots are governed by three fundamental laws.

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

In the movie, the overarching supercomputer VIKI (Virtual Interactive Kinetic Intelligence) draws the only logical conclusion available from the three laws and years of accumulated data monitoring human behavior: to truly comply with the laws, mankind must be subjugated in order to be protected from its own self-destructive and aggressive tendencies. This is a classic example of the kind of conclusion an "inference" engine will draw. There is still clearly no substitute for the "decision" capabilities of human judgement.

We may pride ourselves that we have mimicked human learning with our machine learning, but we miss a basic truth: if there is ultimately such a thing as artificial intelligence based on how humans learn, there must be a balancing amount of artificial stupidity. Humans frequently make mistakes and invalid correlations. That's why the whole of science is premised on repeatability of results rather than mere inference and deduction. That is why neither the prosecutor nor the defense attorney is allowed to decide the guilt or innocence of an accused criminal; that falls to a jury of the accused's peers, given a careful presentation of the facts of the matter at hand.

It is really quite amazing to note all the changes that technology has brought to modern life. But despite all the great things technology does, it cannot take responsibility. Machines and their algorithms have come a long way; they can learn through successful training strategies, but they can never take responsibility. Allowing machines to make decisions when the operator's only response is "I don't know" can only create more chaos. When a machine is making decisions, a human should be able to review what those decisions were and why; otherwise we have ceded responsibility. A human being ultimately decided to put the machine in charge, and we are fools if we let that human being hide behind a smokescreen of machine learning or AI. It's a poor workman who blames his tools. Ultimately, we must be careful not to give too much responsibility to modern AI without ensuring some level of human oversight and judgement.

In the meantime, I hope my friend can get her credit card activated again.



