Turing ace explores Mission Critical Machine Learning at Cambridge Sessions showcase
Mission Critical Machine Learning is the subject of the next Cambridge Sessions technology series hosted by Featurespace, the creator of world-leading analytics to fight online financial crime.
The June 14 blockbuster at the Cambridge University Union from 5pm stars Mark Girolami, Chief Scientist at the Turing Institute, and Derek McAuley, Director of Digital Economy at Nottingham University.
The session will explore explainability, governance and bias. Mark will share the story of the work the Turing Institute did with the UK government during the Covid-19 lockdown.
Guests can register at: https://www.featurespace.com/rsvp-the-cambridge-sessions-mission-critical-machine-learning/
Digging deeper into the rationale of the government project titled ‘Odysseus’, Girolami told Business Weekly: “ML is just a new class of algorithms and not all algorithms are mission critical. Conversely, in terms of regulatory compliance being an important part of mission delivery, I don’t need ML to be unlawful or unethical.
“I can assess that with Excel and the sort function. The specific challenge with ML is understanding it well enough to know you are compliant.”
McAuley, whose work spans the issues of ethics, identity, privacy and the regulations that surround them, explained: “Bias is not universally a problem; for example, we discriminate in favour of those over 60 in the transport system, or families in the tax system. Society is full of biases in favour of some groups at some times, because that is what society has decided it wants to do.
“What we are concerned about is algorithms that are asserted to be fair against some criteria of universal utility but simply fail on that. Examples include gender and racial bias in face recognition such as those found in the UK passport photo checker.
“Such biases often arise simply because the training data sets are not representative of the population to whom the algorithm is applied.
“All industries are regulated and live in a complex landscape of generic (like GDPR) and sector-specific (like FSA) regulation. AI and ML do not get a pass on this regulation.
“However they work, they must exist in ‘systems’ that comply – hence the major concern with AI and ML is understanding their impact on the system’s compliance.
“While there have been calls to regulate AI and ML in a generic manner, it makes no sense without considering what the data represents, what the algorithm does and what the consequences are.
“In this regard many view the EU AI Act as misnamed, as really it is looking at specific classes of use, and its definitions are viewed by many as including algorithms that do automated decision making whether ML or not.”