> AI might replicate human decisions without improving on them, carrying forward the same biases
Definitely true that by default models inherit the biases of the data they're trained on, but I think one nice property is that you can directly test counterfactuals. If you have a case with a Black defendant, you can check whether the judgment would have been different had the defendant been white.
That's not a be-all and end-all cure for bias/discrimination, but it is something useful that we can't really do with human decision-makers.
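A minimal sketch of what that counterfactual probe could look like. Everything here is hypothetical: `counterfactual_gap`, the `predict` interface, and the toy model are all made up for illustration, and the toy model is deliberately biased so the probe has something to detect.

```python
# Hypothetical sketch: probe a decision model for race-based bias by
# flipping a single attribute and comparing the model's outputs.

def counterfactual_gap(model, case):
    """How much the prediction changes when only the defendant's race changes."""
    flipped = dict(case, race="white" if case["race"] == "black" else "black")
    return model.predict(flipped) - model.predict(case)

class ToyModel:
    """Stand-in model with a deliberate bias, so the probe has a signal to find."""
    def predict(self, case):
        score = case["prior_offenses"]
        if case["race"] == "black":  # the learned bias we want to detect
            score += 2
        return score

case = {"race": "black", "prior_offenses": 3}
print(counterfactual_gap(ToyModel(), case))  # -2: flipping race to "white" lowers the score
```

The point is just that with a model you can hold every other feature fixed and vary the sensitive attribute alone, something impossible to do with a human judge.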
> He claims that human consciousness enables us to perceive certain truths – such as those Gödel showed to be unprovable within formal systems – in ways that no algorithmic process can replicate.
To my understanding it's not that there are some fixed statements which we can see are true but no formal system can prove, but rather that Gödel statements are constructed for a specific system and can be proven just fine by other systems. Humans aren't immune to the same trick: can you prove "You cannot prove this statement", or its negation? You have to give up consistency or completeness (or both) in your reasoning, and silicon-based systems can make that trade-off just as carbon-based ones do.
"Gödel’s incompleteness theorems apply not only to AI, but to any ethical reasoning framed within a formal system." - no mathematician ever
Do the axioms of ethics allow encoding Peano arithmetic? Gödel's incompleteness theorems only apply to formal systems that can encode arithmetic in the first place.
If the axioms of ethics didn't, then ethics couldn't talk about numbers and addition, which would be very strange.