Source: Braswell, Porter. "How AI Will Change Diversity, Equity and Inclusion." Fast Company, 12 Apr. 2023, www.fastcompany.com/90879326/this-is-how-ai-will-disrupt-the-dei-industry.
AI can be all too human: quick to judge, capable of error, vulnerable to bias. It's made by humans, after all. Humans design the systems and tools that make new forms of AI faster. Humans are the data sources that make AI smarter. And humans will make the decisions about how to use AI: the laws and standards, the tools, the ethics, who benefits, who gets hurt. "Made by Humans" explores our role in automation and the responsibilities we must take on.
The book offers major recommendations for actions that governments, businesses, and individuals can take to promote trustworthy and responsible artificial intelligence. These recommendations include: creating ethical principles; strengthening government oversight; defining corporate culpability; establishing advisory boards at federal agencies; using third-party audits to reduce biases inherent in algorithms; tightening personal privacy requirements; using insurance to mitigate exposure to AI risks; broadening decision-making about AI uses and procedures; penalizing malicious uses of new technologies; and taking proactive steps to address how artificial intelligence affects the workforce.
Artificial intelligence (AI) is increasingly a business imperative. As AI tools propagate across nearly every industry and sector, so too do myriad ethical risks. Bias and discrimination, reputational damage and regulatory consequences, novel solutions delivering poor results that hurt the bottom line: these and many other harms can emerge from AI that falls short of ethical design, development, deployment, and use.
Machine learning algorithms and artificial intelligence influence many aspects of life today and have gained an aura of objectivity and infallibility. The use of these tools introduces a new level of risk and complexity into policy decisions. This report illustrates some of the shortcomings of algorithmic decision making, identifies key themes around the problem of algorithmic errors and bias, and examines some approaches for combating these problems.
"A jaw-dropping exploration of everything that goes wrong when we build AI systems, and the movement to fix them."
In this book, the author argues that the structural inequalities reproduced in algorithmic systems are no glitch. They are part of the system design. This book shows how everyday technologies embody racist, sexist, and ableist ideas; how they produce discriminatory and harmful outcomes; and how this can be challenged and changed.
"In Algorithms of Oppression, Safiya Umoja Noble challenges the idea that search engines like Google offer an equal playing field for all forms of ideas, identities, and activities. Data discrimination is a real social problem. Noble argues that the combination of private interests in promoting certain sites, along with the monopoly status of a relatively small number of Internet search engines, leads to a biased set of search algorithms that privilege whiteness and discriminate against people of color, especially women of color."
This book tells the stories of this algorithmic battle for the truth and how it impacts individuals and society at large. In doing so, it weaves together the human stories of what's at stake, a simplified technical background on how these algorithms work, and an accessible survey of the research literature on these topics.