10 June, 2022 - 17:00 By Tony Quested

Turing ace explores Mission Critical Machine Learning at Cambridge Sessions showcase

Mission Critical Machine Learning is the subject of the next Cambridge Sessions technology series fashioned by Featurespace, the creator of world-leading analytics to fight online financial crime.

The June 14 blockbuster at the Cambridge University Union from 5pm stars Mark Girolami, Chief Scientist at the Turing Institute, and Derek McAuley, Director of Digital Economy at Nottingham University.

The session will explore explainability, governance and bias. Mark will share the story of the work the Turing Institute did with the UK government during the Covid-19 lockdown.

Guests can register at: https://www.featurespace.com/rsvp-the-cambridge-sessions-mission-critical-machine-learning/

Digging deeper into the rationale of the government project titled ‘Odysseus’, Girolami told Business Weekly: “ML is just a new class of algorithms and not all algorithms are mission critical. Conversely, in terms of regulatory compliance being an important part of mission delivery, I don’t need ML to be unlawful or unethical.

“I can assess that with Excel and the sort function. The specific challenge with ML is understanding it well enough to know you are compliant.”

McAuley, whose work spans the issues of ethics, identity, privacy and the regulations that surround them, explained: “Bias is not universally a problem; for example, we discriminate for those over 60 in the transport system, or families in the tax system; society is full of biases in favour of some groups at some times, because that is what society has decided it wants to do.

“What we are concerned about is algorithms that are asserted to be fair against some criteria of universal utility but simply fail on that. Examples include gender and racial bias in face recognition, such as that found in the UK passport photo checker.

“Such biases are often traced simply to training data sets that are not representative of the population to whom the algorithm is applied.

“All industries are regulated and live in a complex landscape of generic (like GDPR) and sector-specific (like FSA) regulation. AI and ML do not get a pass on this regulation.

“However they work, they must exist in ‘systems’ that comply – hence the major concern with AI and ML is understanding their impact on the system’s compliance.

“While there have been calls to regulate AI and ML in a generic manner, it makes no sense without considering what the data represents, what the algorithm does and what the consequences are.

“In this regard many view the EU AI Act as misnamed, as really it is looking at specific classes of use, and its definitions are widely seen as covering algorithms that make automated decisions, whether ML or not.”
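
McAuley’s point about unrepresentative training data can be made concrete with a simple audit. The sketch below is not from the article: the group names and figures are hypothetical, and it assumes you already know each group’s share of the population the system will serve. It compares each group’s share of a training set against its population share and flags groups that fall materially short.

```python
def representation_gap(train_counts, population_shares):
    """Compare each group's share of a training set against its share
    of the target population. Returns the gap per group; a negative
    value means the group is under-represented in the training data."""
    total = sum(train_counts.values())
    gaps = {}
    for group, pop_share in population_shares.items():
        train_share = train_counts.get(group, 0) / total
        gaps[group] = train_share - pop_share
    return gaps

# Hypothetical numbers: labelled images per group in a face-recognition
# training set, versus each group's share of the population served.
train_counts = {"group_a": 7000, "group_b": 2000, "group_c": 1000}
population_shares = {"group_a": 0.50, "group_b": 0.30, "group_c": 0.20}

for group, gap in representation_gap(train_counts, population_shares).items():
    flag = "UNDER-REPRESENTED" if gap < -0.05 else "ok"
    print(f"{group}: {gap:+.2f} ({flag})")
```

On these made-up figures the check flags group_b and group_c, each 10 percentage points short of their population share; the 5-point threshold is an arbitrary illustration, and a real audit would choose it per application.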
