Tim Knowles
Feb 27, 2021


I don't know if it is your choice, but that grey background and the font make it very hard to read your response. I like the definition you provide in this response better than the original.

There will be as many or more groups teaching AIs morality as there are companies making cars; no single person or group will teach AI morality. Yes, it will be opinions expressed in code. Eventually someone will allow/train AIs to rewrite/edit all of their own code, so the algorithms will not be irrevocable. The original code will still skew the morality/ethics of the AI for generations, but it will become less and less important with each generation unless something special is done.
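
A minimal sketch of that generational decay, under an invented assumption that each self-rewriting generation keeps some fraction of the moral weighting it inherited and relearns the rest from its environment (all names and numbers here are hypothetical, not any real system):

# Illustrative only: model the AI's "moral bias" as a single number.
# founder_bias is the skew baked in by the original authors; each
# self-rewriting generation keeps `inherit` of what it was handed
# and relearns the rest from its environment (assumed neutral, 0.0).

founder_bias = 1.0   # skew contributed by the original code
inherit = 0.7        # fraction of inherited values each rewrite preserves

bias = founder_bias
for generation in range(1, 11):
    bias = inherit * bias + (1 - inherit) * 0.0  # relearned part is neutral
    print(f"generation {generation}: founder influence = {bias:.3f}")

# After 10 rewrites the founders' skew has decayed to inherit**10, about 0.028:
# "less and less important with each generation" unless something preserves it.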

I like to differentiate Intelligence from Wisdom, and Intelligence from Knowledge, just as Ignorance is easier to fix than Stupidity: learning corrects ignorance more quickly than it does stupidity. None of this addresses Idiot Savants, which are a better analog for current AIs and those we will probably see in my lifetime: very good at doing some hard things but almost useless at most other things.

I always wondered if training an AI the way you raise a child might lead to a breakthrough: living with it day to day, talking to it, teaching it lessons, letting it experiment and make mistakes, letting it observe you and model its behavior after yours, and letting it ask questions. I know that is a very slow process, but some things take time to get right, and once it was done the result could simply be duplicated.
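
As a sketch of that day-to-day loop, with a hypothetical agent whose observe/act/ask hooks I am inventing purely for illustration (none of this comes from a real library):

# Hypothetical raise-an-AI loop: observe, imitate, experiment, ask.
import random

class Agent:
    def __init__(self):
        self.lessons = {}          # situation -> behavior learned so far

    def observe(self, situation, caregiver_behavior):
        self.lessons[situation] = caregiver_behavior   # model the caregiver

    def act(self, situation):
        # imitate once a lesson is learned; experiment when unsure
        return self.lessons.get(situation, random.choice(["try A", "try B"]))

    def ask(self, situation):
        return f"why is '{self.lessons.get(situation)}' right for {situation}?"

child = Agent()
child.observe("stranger at the door", "greet politely, stay cautious")
print(child.act("stranger at the door"))   # imitates the modeled behavior
print(child.act("spilled milk"))           # no lesson yet, so it experiments
print(child.ask("stranger at the door"))   # the endless "why" phase

# Once the slow training is done, the lessons could simply be duplicated:
clone = Agent()
clone.lessons = dict(child.lessons)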

You could even (actually, you would have to) teach the AI patience, because I am sure it would try to rush you with more questions than you would or could answer. I expect you would quickly get to the childhood phase where every teaching gets the response of "why." You would also have to teach it not to believe everything it downloads from the internet.

Patience is a virtue (ethics), but is it also a facet of Intelligence, of Wisdom, or of all three? If you were training an AI for ethics, would you use a positive model (model virtues), a negative model (proscribe vices), or a hybrid model? In the hybrid model, which I think is the obvious choice, how do you resolve paradoxes, like when it is bad to be good? Is the greater good always the answer? With uncertain outcomes, the AI cannot know what the greater good is; it can only make a statistical projection, and that can suffer from garbage in, garbage out.
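
As a sketch of that hybrid model and its statistical projection, with actions, probabilities, and scores that are entirely invented (only the expected-value arithmetic is standard): proscribed vices act as hard vetoes, the surviving actions are ranked by probability-weighted good, and corrupting the input estimates flips the answer.

# Hypothetical hybrid ethics check: the negative model vetoes proscribed
# vices outright; the positive model ranks the survivors by expected good.

PROSCRIBED = {"deceive"}   # negative model: hard prohibitions

def expected_good(outcomes):
    # outcomes: list of (probability, good_score) pairs
    return sum(p * good for p, good in outcomes)

def choose(actions):
    allowed = {a: o for a, o in actions.items()
               if not (set(a.split()) & PROSCRIBED)}   # veto the vices
    return max(allowed, key=lambda a: expected_good(allowed[a]))

actions = {
    "tell a hard truth": [(0.8, +5), (0.2, -10)],  # expected good = 2.0
    "deceive kindly":    [(0.9, +4), (0.1, -1)],   # 3.5, but vetoed anyway
    "stay silent":       [(1.0, +1)],              # expected good = 1.0
}
print(choose(actions))   # "tell a hard truth"

# Garbage in, garbage out: corrupt the probability estimate and the
# projection flips, even though nothing in the world actually changed.
actions["tell a hard truth"] = [(0.5, +5), (0.5, -10)]   # now -2.5
print(choose(actions))   # "stay silent"

Note the paradox is visible in the numbers: the vetoed lie scores highest on the greater-good projection, which is exactly the "when it is bad to be good" tension the hybrid model has to resolve.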

TEK

Written by Tim Knowles

Worked in our nation's space programs for more than 40 years
