My team and I participate in a number of groups trying to devise an ethical framework to guide data-driven applications and algorithms. Participants ask: What data? How is it collected? How is it used? Does it drive greater fairness? Does it perpetuate bias? Are the answers all hidden in a black-box architecture?
I have taught and provided consulting services in the area of business ethics over the years, but I find most suggestions and frameworks fall a bit short.
Lately, I’ve been thinking it may be because much of that work pertains to big companies with established revenue streams. The ethical considerations they face, including privacy policies, are less likely to be make-or-break than those facing, say, a startup.
At the same time, I have found myself examining startups based on who, or what economic model, funds them. In the case of online lenders, that includes investors in the company as well as the sources of financing for debt funds and/or the acquisition of the loans. What pressures might these stakeholders place on the team around data practices or the application of algorithms?
For example, if the valuation in a funding round is based on the scope of the data, the pressure will be to control that data: to declare that the business owns it, even when it is deeply personal data about individuals. If revenues are based on loan fees and interest rates, the data collected and the algorithms applied to it may be prioritized toward whatever category or quantity yields the biggest margins. This certainly seems to be true of lenders. Those claiming to be ethical alternatives to oft-criticized payday lenders will rationalize very high APRs based on “neutral” data and algorithms showing that a given borrower or class of borrowers brings greater risk of default.
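To make the point concrete, here is a minimal, entirely hypothetical sketch (the function, numbers, and pricing rule are all invented for illustration, not drawn from any real lender) of how a risk-based pricing rule can launder a business choice as a data-driven conclusion:

```python
# Hypothetical illustration: a "neutral" risk score is mapped to an APR.
# The score may come from a model, but the spread charged per unit of
# risk is a business decision, not a statistical necessity -- widening
# it raises margins on exactly the borrowers the model labels riskiest.

def price_loan(risk_score: float, base_apr: float = 0.10) -> float:
    """Map a default-risk score in [0, 1] to an annual percentage rate."""
    risk_spread = 2.5  # chosen by the lender, not dictated by the data
    return base_apr + risk_spread * risk_score

# The same "objective" inputs, a different spread, a very different quote:
print(round(price_loan(0.1), 3))  # low-risk borrower
print(round(price_loan(0.9), 3))  # high-risk borrower, payday-scale APR
```

The model can be perfectly calibrated and the outcome still extractive; the ethics live in the `risk_spread` constant, which the "follow the money" lens points at directly.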
In other words, follow the money.