Support the show, consider donating:
BTC: 1CD83r9EzFinDNWwmRW4ssgCbhsM5bxXwg
ETH: 0x8cdb49ca5103Ce06717C4daBBFD4857183f50935
A significant part of the modern digital economy is underpinned by machine learning models trained to perform tasks such as facial recognition, content curation, and health diagnostics. Data to train machine learning models is the essential commodity of this century – a sentiment captured by epithets such as “Data is the new oil”. In today’s dominant AI paradigm, companies focus their efforts on gathering data from their users in order to train models and monetise usage of those models. This paradigm has many consequences: loss of privacy for the user, consolidation of data in a handful of large companies, limited access to data for startups, and a fundamental impossibility of collecting sensitive data such as markers for depression.
Our guest, Andrew Trask, is building OpenMined – a platform that merges cryptographic techniques, such as homomorphic encryption and multi-party computation, with blockchain technology to enable training of ML models on private user data. OpenMined will allow AI companies of the future to develop models, have them trained on user data without compromising user privacy, and incentivise users to contribute to training. We walk through the OpenMined vision and its potential impact on AI business models and AI safety.
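To give a flavour of the multi-party computation techniques mentioned above, here is a minimal sketch of additive secret sharing – one standard MPC building block. This is an illustrative toy, not OpenMined's actual implementation; the modulus and function names are our own choices.

```python
import random

Q = 2**31 - 1  # large prime modulus (illustrative choice)

def share(secret, n_parties=3):
    """Split an integer into n additive shares that sum to it mod Q."""
    shares = [random.randrange(Q) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % Q)
    return shares

def reconstruct(shares):
    """Recover the secret by summing all shares mod Q."""
    return sum(shares) % Q

# Each party holds one share; no single share reveals the secret.
a_shares = share(25)
b_shares = share(17)

# Parties add their shares locally, computing a sum of private values
# without any party ever seeing the other value in the clear.
sum_shares = [(x + y) % Q for x, y in zip(a_shares, b_shares)]
print(reconstruct(sum_shares))  # 42
```

Because addition (and, with more machinery, multiplication) can be performed on shares, gradient updates for a model can in principle be computed over data that no single party holds in plaintext – the core idea behind training on private user data.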
Topics covered in this episode:
- Challenges with the current AI paradigm
- OpenMined’s vision to allow training of AI models with private user data
- How OpenMined works under the hood
- Applications enabled by OpenMined
- Current state and OpenMined hackathon
Episode links:
This episode was hosted by Brian Fabian Crain & Meher Roy, and is available on YouTube, SoundCloud, and our website.