Remember years ago when self-checkout was on the rise? My parents would always opt for an actual cashier, remarking with a sigh, “They’re replacing us.” Fast forward, and, as dramatic as it sounded at the time, they may have been right. The CVS in my neighborhood now has six self-checkout kiosks and one employee at the register, who only helps when the kiosks aren’t working. As we move toward an I, Robot future, the use of AI has become more and more accepted, and even expected. But what exactly is AI, and what can we do to prepare for its eventual takeover?
As more and more industries incorporate AI into their practices, it’s essential to understand how AI can be used, and abused. Andrew Smith, Director of the FTC Bureau of Consumer Protection, suggests a wide variety of ways businesses can use AI responsibly. Smith breaks it down very simply: use AI transparently, thoroughly, and fairly. Remain accountable for your use of AI, taking the good, the bad, and the amazing (or the FTC will do it for you).
Building a foundation on pragmatic ethical principles is the key to a socially competent and sustainable business. Whether it be in your hiring practices, your policy making, or even consumer outreach, you must know the ins and outs of EVERYTHING happening in your business. Adopting an integrity-based model will put you ahead of the curve. When using something new and exciting, like ChatGPT, remember that these programs are human-coded. And just like all of us, they are evolving too.
Section I. What is AI?
Artificial intelligence, or AI, is, as Michael Atleson, Attorney for the FTC Division of Advertising Practices, puts it, an “ambiguous term” that has been used to describe the programs and processes developed through the “science and engineering… of intelligent machines.” IBM argues it “enable[s] problem-solving.” The Federal Trade Commission, however, warns that even with all of its usefulness, AI is a problem in and of itself.
What is supposed to be a neutral, unbiased collection of limitless information has turned out to be not so neutral or unbiased. Researchers have sounded the alarm that, due to the human programming of these complex machines, they may not be as foolproof as we all hoped. We can’t deny the benefits of AI, but using it responsibly is necessary for it to be successfully integrated into our society.
The Risks and Rewards
The risks and rewards of AI cannot be overstated. Because of its ability to process enormous amounts of data, AI promotes increased efficiency and productivity, freeing up human resources. It can also improve public safety through its autonomous capabilities, for example by revolutionizing travel through self-driving cars. And of course, AI opens up new possibilities for innovation and discovery, assisting in anything from scientific research to space exploration to medical breakthroughs.
But, as with anything, there’s always a risk with such rapidly evolving technology. Growing concerns question the privacy and data-retention practices of AI programs. Equally worrying is the lack of transparency and accountability in some systems. Where is this information coming from, and how accurate is it? Troubling bouts of bias and discrimination have rocked the technological world as well, serving as a reminder that these systems are still built with human bias.
The ethics of AI use are often challenged, and rightfully so. But understanding how to leverage the power of AI with integrity is the building block to creating a more empowered world.
Section II. Legal & Compliance Challenges with AI
A. Data Integrity
The integrity of AI data has become more and more questionable as we face the reality that some data is biased, baseless, and just plain bad. Incorrect and incomplete information can flood systems, pushing out untrue or misinformed guidance. Sometimes we may forget, but the person behind the system is what makes the system work. If a system is created, programmed, or updated with bias, whether conscious or not, the output will always reflect that. AI literacy training can help teams recognize these blind spots before they reach production.
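To make that point concrete, here is a minimal sketch, with entirely hypothetical data, of how skew in historical records flows straight through to a model’s output:

```python
# Toy illustration (hypothetical data): a model trained on biased
# records reproduces that bias. Nothing here reflects real applicants.
from collections import Counter

# Historical hiring records as (group, hired) pairs. The imbalance
# below is the bias baked into the data itself.
training_data = (
    [("A", 1)] * 80 + [("A", 0)] * 20 +
    [("B", 1)] * 20 + [("B", 0)] * 80
)

def naive_model(group: str) -> int:
    """Predict the majority historical outcome for a group."""
    outcomes = Counter(hired for g, hired in training_data if g == group)
    return outcomes.most_common(1)[0][0]

print(naive_model("A"))  # 1 -- group A is approved
print(naive_model("B"))  # 0 -- group B is rejected, echoing the skewed records
```

The model never “decides” to discriminate; it simply mirrors the record it was given, which is why auditing the training data matters as much as auditing the code.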
B. Data Privacy
AI systems are built on huge amounts of data, which is what makes them so expansive and full of possibilities. It is also what makes them a huge risk when it comes to privacy and security. Potentially sensitive data can be entrusted to these programs (hello, Face ID). And what happens if that information is mishandled or compromised? What if it lands in the wrong hands? Data security measures must be in place to protect the integrity and confidentiality of the data used in AI systems.
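One such measure, sketched below under the assumption that user text is being sent to a third-party AI service, is scrubbing obvious identifiers before the data ever leaves your systems. The patterns here are illustrative, not an exhaustive PII filter:

```python
# Minimal sketch: redact obvious identifiers before sending text to an
# external AI service. Real pipelines use far more thorough detection.
import re

PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each matched identifier with a labeled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text

print(redact("Reach me at jane@example.com or 555-867-5309."))
# Reach me at [EMAIL REDACTED] or [PHONE REDACTED].
```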
C. Hallucinations – Misinformation/Disinformation
AI data misinformation refers to the phenomenon where AI systems propagate or generate false or misleading information. This can occur for various reasons, including biased training data, adversarial attacks, or flaws in the AI algorithms themselves.
Content that comes out of these platforms is not necessarily accurate, even if it seems to be, and if relied upon too heavily, it can lead to widespread disinformation. A hallucination is a confident response by an AI that is not justified by its training data or input. This again leads to biased and uninformed data circulating. All data is biased, but how can we account for these inherent problems?
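One hedge against hallucinated output, sketched below with made-up source text, is to check generated answers against the documents they are supposed to come from. Production systems use semantic matching; plain word overlap keeps the idea visible:

```python
# Minimal sketch: flag answer sentences with weak overlap against any
# known source. A real grounding check would use embeddings, not words.
SOURCES = [
    "The FTC enforces consumer protection law.",
    "Getty Images sued the makers of an AI art tool over scraped content.",
]

def support_score(sentence: str, source: str) -> float:
    """Fraction of the sentence's words that also appear in the source."""
    words = set(sentence.lower().split())
    return len(words & set(source.lower().split())) / len(words) if words else 0.0

def flag_unsupported(answer: str, sources: list[str], threshold: float = 0.5) -> list[str]:
    """Return answer sentences whose best source overlap is below threshold."""
    return [
        sentence
        for sentence in answer.split(". ")
        if max(support_score(sentence, s) for s in sources) < threshold
    ]

print(flag_unsupported(
    "The FTC enforces consumer protection law. The FTC was founded on Mars",
    SOURCES,
))  # ['The FTC was founded on Mars']
```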
D. Intellectual Property
The legal and regulatory framework around AI is forever evolving. The intellectual property risks posed by AI use range from infringement issues to ownership rights. Because of the overlap of technological systems, questions of who owns what arise often. Determining ownership of AI-generated works can be challenging: copyright law generally grants authorship to human creators, but when AI is involved, it becomes unclear whether the AI, the developer, or the user should be considered the author or owner. On the flip side, issues arise in determining who owns the data used to train AI systems and who has the right to access and use that data. Because AI systems pull data from a wide variety of sources, the lines can become blurred.
With new technology comes new legal issues, some of which the law is still struggling to catch up with. How do we manage conflicting artificial intelligence and intellectual property interests in an era of the open internet? Recently, Getty Images sued the creators of AI art tools for violating copyright law by scraping its content, leaving a lot of us wondering what’s next for content creators.
Further, Microsoft, GitHub, and OpenAI are currently being sued in a class action that accuses them of violating copyright law by allowing Copilot, a code-generating AI system, to regurgitate licensed code snippets without providing credit. Again, we see a duel in which both sides have a major stake in the outcome.
Section III. Ethical Principles for the Responsible Design, Deployment, and Use of Artificial Intelligence Capabilities.
1. Objective and Equitable
As discussed above, what is supposed to be a neutral, unbiased collection of limitless information often isn’t. A recent US Department of Commerce study found that people of color were often misidentified, or not identified at all, by AI facial recognition. In one instance, darker skin tones weren’t even recognized by automatic hand-soap dispensers. Further, a study conducted by UC Berkeley found mortgage algorithms consistently charging Latino and Black borrowers higher interest rates. These concerning issues are just scratching the surface of bias in AI.
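One way organizations quantify this kind of disparity is a disparate impact check, comparing favorable-outcome rates across groups. Here is a minimal sketch with hypothetical numbers:

```python
# Minimal sketch (hypothetical outcomes): the disparate impact ratio
# compares favorable-outcome rates across groups. A ratio well below
# 1.0 is a red flag worth investigating, not proof of wrongdoing.
def favorable_rate(decisions: list[int]) -> float:
    """Share of decisions that were favorable (1 = approved)."""
    return sum(decisions) / len(decisions)

# Toy loan-approval outcomes per group (1 = approved, 0 = denied).
group_a = [1, 1, 1, 1, 0, 1, 1, 1, 0, 1]  # 80% approved
group_b = [1, 0, 0, 1, 0, 1, 0, 0, 1, 0]  # 40% approved

ratio = favorable_rate(group_b) / favorable_rate(group_a)
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.50, far below the 0.8 rule of thumb
```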
Data journalism professor Meredith Broussard says “we talk about bias in tech like it’s… a glitch,” but that social problems in AI are much more than that. She pushes the point that these problems are not easily fixed code issues but, rather, reflections of the innate biases we all have, carried into human programming. Broussard stresses that proposed AI regulation in the EU could be a major game changer in how we calculate the risk of AI use. This regulation would categorize different forms and uses of AI into high-risk and low-risk classifications, letting innovators know exactly how AI should be incorporated into their systems.
2. Transparency and Accountability
AI processes need to be understandable on a general level. This only happens when the development phase is clearly documented, outlining the algorithm’s ins and outs in a transparent way. This is the best way to ensure the public is able to trust and understand AI, further allowing us to use it most effectively.
And of course, with that transparency comes accountability. Companies that develop, and even use, AI must be able to explain where their information comes from, what information was used and why, and how that information was implemented in further decision making.
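In practice, that explanation is often captured in a “model card” recorded alongside the model itself. The sketch below uses illustrative field names, loosely modeled on published model-card templates:

```python
# Minimal sketch: persist basic provenance and intended-use answers
# next to the model artifact, so they can be produced on request.
import json
from datetime import date

model_card = {
    "model": "loan-screening-v2",  # hypothetical model name
    "date": date.today().isoformat(),
    "training_data": "2015-2022 application records, PII removed",
    "intended_use": "flag applications for human review, not auto-deny",
    "known_limitations": ["underrepresents rural applicants"],
    "bias_audits": ["disparate impact ratio checked quarterly"],
}

with open("model_card.json", "w") as f:
    json.dump(model_card, f, indent=2)  # version this file with the model
```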
3. Valid and Reliable
AI systems can lack validity in many ways, chief among them our inability to explain or understand the internal workings and decision-making processes of these models. This leads to a limited comprehension of contextual information, which greatly impacts reliability.
4. Secure and Resilient
System security is one of the most pressing issues when it comes to AI. Cybersecurity must be at the forefront of development and oversight to fend off cyberattacks, data leaks and breaches, data poisoning, and model stealing. Addressing these security and resiliency issues requires a multi-faceted approach, including robust security protocols, rigorous testing and validation, secure data handling practices, and ongoing monitoring and response mechanisms to detect and mitigate potential attacks or vulnerabilities.
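As one small example of ongoing monitoring, a deployment can verify a model file’s checksum before loading it, catching silent tampering (one route to data poisoning or model swapping). The file name and recorded hash below are placeholders:

```python
# Minimal sketch: refuse to load a model file whose SHA-256 digest no
# longer matches the value recorded at deployment time.
import hashlib

EXPECTED_SHA256 = "replace-with-hash-recorded-at-deployment"  # placeholder

def verify_model(path: str) -> bool:
    """Compare the file's SHA-256 digest against the recorded value."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest() == EXPECTED_SHA256

# Usage, assuming model.bin exists alongside its recorded hash:
# if not verify_model("model.bin"):
#     raise RuntimeError("model.bin failed integrity check; aborting load")
```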
Conclusion
By considering these ethical guidelines during both the development and deployment phases, companies can ensure that their AI tools are responsibly created and used, aligning with integrity-based values and principles. Ensuring that AI systems are transparent, fair, and accountable, while prioritizing privacy and data protection, copyright law, and freedom of information, can help build trust in AI tools and ultimately promote their responsible development.