Fake News and AI

AI may be a possible solution to, or another contributor to, “fake news”. The proposed use of AI in fact-checking is only one example of AI that needs to be considered as part of the United States’ future in AI development. A recent development in Washington, DC sounds an alarm concerning the future of the United States’ economic strength and national security as they relate to AI. On February 11, 2019, President Trump signed Executive Order 13859, entitled “Maintaining American Leadership in Artificial Intelligence”. [Federal Register, Vol. 84, No. 31, February 14, 2019, pages 3967-3972] (hereinafter, the “AI Executive Order”). The AI Executive Order declares that the US is the predominant power in AI research and must remain so: “Continued American leadership in AI is of paramount importance to maintaining the economic and national security of the United States and to shaping the global evolution of AI in a manner consistent with our Nation’s values, policies, and priorities.” [Id., at 3967]. The federal government is to coordinate with academia and private industry in the development of AI technology, to develop appropriate technical standards, and to reduce barriers to effective testing and deployment of AI technologies. At the same time, the government must foster public trust and confidence in AI technologies and “…protect civil liberties, privacy and American values in the application [of AI]…” [Id., at 3967].

The AI Executive Order seeks to promote research and development in AI both inside and outside of the federal government. It also notes the importance of teaching high school, undergraduate and graduate students about AI through sponsorship of educational grants within federal fellowship and service programs. [Id. at 3971]. While the simple use of AI to detect “fake news” may not be as useful as some may think, it is clear that the current US administration considers AI research and deployment to be of critical importance that must be fostered in both the private and public sectors.

In late 2017, the Chinese government proclaimed that China plans to become the world leader in AI by 2030. [“China’s Vision for The Next Generation of Artificial Intelligence”, The National Law Review, March 25, 2018]. Just as the United States became the “cradle” of internet development in the 1990s, the Trump Administration sees the US as the cradle of AI as well. Trump’s focus on China as a potential economic and military adversary may have led him to respond to China’s AI plan with the AI Executive Order.

Will the 2020 election results lead to the nurturing of AI development through federal assistance and “laissez-faire” policies, or to boundaries drawn around AI applications and the use of AI data? US House Democrats introduced the “Algorithmic Accountability Act of 2019” (“3A Act”) to regulate AI development and to protect the privacy and security of the resulting AI data. [See, “Keeping an Eye on Artificial Intelligence Regulation and Legislation”, The National Law Review, June 14, 2019]. If passed by both chambers of Congress and signed by the President, the 3A Act would authorize the Federal Trade Commission (“FTC”) to regulate covered entities that use automated decision systems to make decisions about consumers.

“Covered Entities” include persons, partnerships or corporations over which the FTC has jurisdiction under the FTC Act that (i) had more than $50 million in gross receipts over the past three years, (ii) possess or control personal information about more than one million consumers or one million consumer devices, or (iii) are data brokers or other commercial entities that, as a substantial part of their business, collect, assemble, or maintain personal information concerning individuals who are not customers or employees of the entity in order to sell or trade the information or provide third-party access to the information. In the context of the 3A Act, “personal information” is any information, regardless of how the information is collected, inferred, or obtained, that is reasonably linkable to a specific consumer or device. The drafters of the 3A Act appreciate that aggregated data or “big data” can lead to dangerous outcomes that marginalize, stigmatize or “red-line” millions of consumers.
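To make those thresholds concrete, the minimal Python sketch below restates the covered-entity test as a single predicate. The field names and the treatment of the criteria as an “any one suffices” check are the author’s illustration of the article’s description, not language drawn from the bill.

    # Illustrative sketch of the 3A Act's "covered entity" thresholds as described above.
    # Field names and structure are hypothetical; they do not reproduce statutory text.
    from dataclasses import dataclass

    @dataclass
    class Entity:
        gross_receipts_past_three_years: float  # gross receipts over the past three years, in dollars
        consumers_with_data: int                # consumers whose personal information is held
        consumer_devices_with_data: int         # consumer devices whose data is held
        is_data_broker: bool                    # sells or trades data on non-customers/non-employees

    def is_covered_entity(e: Entity) -> bool:
        """Rough reading of the thresholds; meeting any one criterion is enough."""
        return (
            e.gross_receipts_past_three_years > 50_000_000
            or e.consumers_with_data > 1_000_000
            or e.consumer_devices_with_data > 1_000_000
            or e.is_data_broker
        )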

The 3A Act would require covered entities to conduct (i) automated decision system impact assessments and (ii) data protection impact assessments on “high risk automated decision systems”. The required data protection impact assessments are studies evaluating the extent to which an information system protects the privacy and security of the personal information the system processes.

A “high-risk” system is defined as any automated decision system that, taking into account the novelty of the technology used and the nature, scope, context, and purpose of the automated decision system, poses a significant risk to the privacy or security of personal information of consumers.

High-risk automated decision systems may involve the personal information of a significant number of consumers in such areas as race, color, national origin, political opinions, religion, trade union membership, genetic data, biometric data, health, gender, gender identity, sexuality, sexual orientation, criminal convictions or arrests.

The 3A Act also treats as a “high-risk automated decision system” any AI system that “systematically monitors a large, publicly accessible physical space.”

A “high-risk automated decision system” also includes an automated decision system that makes decisions, or facilitates human decision making, based on systematic and extensive evaluations of consumers, including attempts to analyze or predict sensitive aspects of their lives. “Sensitive aspects” of a consumer’s life include work performance, economic situation, health, personal preferences, interests, behavior, location, or movements, where the resulting decisions alter the legal rights of consumers or otherwise significantly impact consumers.
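Read together, the preceding paragraphs amount to a multi-branch test. The short Python sketch below restates that test in the same spirit as the covered-entity example; the category names, parameters and “any branch suffices” logic are the author’s paraphrase of the article’s description, not the bill’s definitions.

    # Illustrative sketch of the "high-risk automated decision system" criteria described above.
    # Category and parameter names are hypothetical paraphrases, not statutory definitions.
    SENSITIVE_CATEGORIES = {
        "race", "color", "national_origin", "political_opinions", "religion",
        "trade_union_membership", "genetic_data", "biometric_data", "health",
        "gender", "gender_identity", "sexuality", "sexual_orientation",
        "criminal_convictions", "arrests",
    }

    def is_high_risk(categories_processed: set,
                     significant_number_of_consumers: bool,
                     monitors_public_physical_space: bool,
                     extensively_evaluates_consumers: bool,
                     poses_significant_privacy_or_security_risk: bool) -> bool:
        """Meeting any one branch labels the system high-risk under this reading."""
        involves_sensitive_data = (significant_number_of_consumers
                                   and bool(categories_processed & SENSITIVE_CATEGORIES))
        return (poses_significant_privacy_or_security_risk
                or involves_sensitive_data
                or monitors_public_physical_space
                or extensively_evaluates_consumers)

Under this reading, for example, a system processing health data on a significant number of consumers (is_high_risk({"health"}, True, False, False, False)) would be classified as high-risk even if it poses no other identified risk.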

The 3A Act states that, whenever reasonably possible, covered entities should conduct all such assessments “in consultation with external third parties, including independent auditors and independent technology experts.” Covered entities would need to “reasonably address” in a timely manner the results of such impact assessments.

The 3A Act would authorize state attorneys general to bring a civil action on behalf of residents of that AG’s state in the appropriate federal district court to obtain relief. In addition, the 3A Act would not preempt any state law or forbid state investigations.

In effect, the 3A Act would establish the same fractured state/federal enforcement regime now experienced in the health care industry. State laws may in fact provide tougher standards than the Health Insurance Portability and Accountability Act of 1996 (“HIPAA”), which provides only a “floor” for the protection of “protected health information” or “PHI”.

The 3A Act would make it unlawful for any covered entity to have consumers consent to uses of their protected information that the 3A Act otherwise forbids. No “shrink wrap” waiver or consumer acceptance of data uses can be used by the covered entity to thwart the 3A Act’s intent. Perhaps that is the most interesting pro-consumer feature of the 3A Act.

AI developers must monitor proposed state, federal and international laws on a regular basis as regulators adapt to AI technology.