Friday, April 26, 2024

A useful overview?

https://www.bespacific.com/the-legal-ethics-of-generative-ai/

The Legal Ethics of Generative AI

Perlman, Andrew, The Legal Ethics of Generative AI (February 22, 2024). Suffolk University Law Review, Forthcoming, Available at SSRN: https://ssrn.com/abstract=4735389 or http://dx.doi.org/10.2139/ssrn.4735389

“The legal profession is notoriously conservative when it comes to change. From email to outsourcing, lawyers have been slow to embrace new methods and quick to point out potential problems, especially ethics-related concerns. The legal profession’s approach to generative artificial intelligence (generative AI) is following a similar pattern. Many lawyers have readily identified the legal ethics issues associated with generative AI, often citing the New York lawyer who cut and pasted fictitious citations from ChatGPT into a federal court filing. Some judges have gone so far as to issue standing orders requiring lawyers to reveal when they use generative AI or to ban the use of most kinds of artificial intelligence (AI) outright. Bar associations are chiming in on the subject as well, though they have (so far) taken an admirably open-minded approach to the subject. Part II of this essay explains why the Model Rules of Professional Conduct (Model Rules) do not pose a regulatory barrier to lawyers’ careful use of generative AI, just as the Model Rules did not ultimately prevent lawyers from adopting many now-ubiquitous technologies. Drawing on my experience as the Chief Reporter of the ABA Commission on Ethics 20/20 (Ethics 20/20 Commission), which updated the Model Rules to address changes in technology, I explain how lawyers can use generative AI while satisfying their ethical obligations. Although this essay does not cover every possible ethics issue that can arise or all of generative AI’s law-related use cases, the overarching point is that lawyers can use these tools in many contexts if they employ appropriate safeguards and procedures. Part III describes some recent judicial standing orders on the subject and explains why they are ill-advised. The essay closes in Part IV with a potentially provocative claim: the careful use of generative AI is not only consistent with lawyers’ ethical duties, but the duty of competence may eventually require lawyers’ use of generative AI. The technology is likely to become so important to the delivery of legal services that lawyers who fail to use it will be considered as incompetent as lawyers today who do not know how to use computers, email, or online legal research tools.”





Real rules on deepfakes?

https://www.bespacific.com/deepfakes-in-the-courtroom/

Deepfakes in the courtroom

Ars Technica: “US judicial panel debates new AI evidence rules. Panel of eight judges confronts deep-faking AI tech that may undermine legal trials. On Friday, a federal judicial panel convened in Washington, DC, to discuss the challenges of policing AI-generated evidence in court trials, according to a Reuters report. The US Judicial Conference’s Advisory Committee on Evidence Rules, an eight-member panel responsible for drafting evidence-related amendments to the Federal Rules of Evidence, heard from computer scientists and academics about the potential risks of AI being used to manipulate images and videos or create deepfakes that could disrupt a trial. The meeting took place amid broader efforts by federal and state courts nationwide to address the rise of generative AI models (such as those that power OpenAI’s ChatGPT or Stability AI’s Stable Diffusion), which can be trained on large datasets with the aim of producing realistic text, images, audio, or videos. In the published 358-page agenda for the meeting, the committee offers up this definition of a deepfake and the problems AI-generated media may pose in legal trials…”





How should you regulate an AI that might grow into a person?

https://coloradosun.com/2024/04/25/colorado-generative-ai-artificial-intelligence-senate/

Colorado bill to regulate generative artificial intelligence clears its first hurdle at the Capitol

A Colorado bill that would require companies to alert consumers anytime artificial intelligence is used, and that would add more consumer protections around the budding AI industry, cleared its first legislative hurdle late Wednesday, even as critics testified it could stifle technological innovation in the state.

At the end of the evening, most sides seemed to agree: The bill still needs work.



Thursday, April 25, 2024

Congress is uncomfortable with TikTok.

https://www.theverge.com/2024/4/24/24139036/biden-signs-tiktok-ban-bill-divest-foreign-aid-package

Biden signs TikTok ‘ban’ bill into law, starting the clock for ByteDance to divest it



(Related) President Biden is comfortable with TikTok.

https://www.nbcnews.com/politics/joe-biden/biden-campaign-keep-using-tiktok-signed-ban-law-rcna149158

Biden campaign plans to keep using TikTok through the election





And then what? Do we trust it enough to send them the location and date of the next insurrection?

https://www.nationalreview.com/corner/good-news-ai-can-apparently-spot-conservatives-on-sight-via-facial-recognition-technology/

Good News: AI Can Apparently Spot Conservatives on Sight via Facial Recognition Technology



Wednesday, April 24, 2024

Consent is fiction.

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4333743

Murky Consent: An Approach to the Fictions of Consent in Privacy Law

Consent plays a profound role in nearly all privacy laws. As Professor Heidi Hurd aptly said, consent works “moral magic” – it transforms things that would be illegal and immoral into lawful and legitimate activities. As to privacy, consent authorizes and legitimizes a wide range of data collection and processing.

There are generally two approaches to consent in privacy law. In the United States, the notice-and-choice approach predominates; organizations post a notice of their privacy practices and people are deemed to consent if they continue to do business with the organization or fail to opt out. In the European Union, the General Data Protection Regulation (GDPR) uses the express consent approach, where people must voluntarily and affirmatively consent.

Both approaches fail. The evidence of actual consent is non-existent under the notice-and-choice approach. Individuals are often pressured or manipulated, undermining the validity of their consent. The express consent approach also suffers from these problems – people are ill-equipped to decide about their privacy, and even experts cannot fully understand what algorithms will do with personal data. Express consent also is highly impractical; it inundates individuals with consent requests from thousands of organizations. Express consent cannot scale.

In this Article, I contend that most of the time, privacy consent is fictitious. Privacy law should take a new approach to consent that I call “murky consent.” Traditionally, consent has been binary – an on/off switch – but murky consent exists in the shadowy middle ground between full consent and no consent. Murky consent embraces the fact that consent in privacy is largely a set of fictions and is at best highly dubious.

Because it conceptualizes consent as mostly fictional, murky consent recognizes its lack of legitimacy. To return to Hurd’s analogy, murky consent is consent without magic. Rather than provide extensive legitimacy and power, murky consent should authorize only a very restricted and weak license to use data. Murky consent should be subject to extensive regulatory oversight with an ever-present risk that it could be deemed invalid. Murky consent should rest on shaky ground. Because the law pretends people are consenting, the law’s goal should be to ensure that what people are consenting to is good. Doing so promotes the integrity of the fictions of consent. I propose four duties to achieve this end: (1) duty to obtain consent appropriately; (2) duty to avoid thwarting reasonable expectations; (3) duty of loyalty; and (4) duty to avoid unreasonable risk. The law can’t make the tale of privacy consent less fictional, but with these duties, the law can ensure the story ends well.





Tools & Techniques. (Talking gooder to your AI)

https://www.makeuseof.com/ai-prompting-tips-and-tricks-that-actually-work/#explain-what-hasn-39-t-worked-when-you-39-ve-prompted-in-the-past

7 AI Prompting Tips and Tricks That Actually Work

A whole new field of prompt engineering is springing to life, dedicated to crafting and perfecting the art of AI prompting. But you can skip the tricky bits and improve your AI prompting game with these tips and tricks.
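
Most of these tips boil down to giving the model a role, context, constraints, and an output format. Here is a minimal sketch of that structure, assuming the official OpenAI Python client; the model name and prompt text are illustrative, not from the article.

```python
# A minimal sketch of a structured prompt (role, context, constraints,
# output format), assuming the official OpenAI Python client
# (pip install openai). Model name and prompt text are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

opinion_text = "…full text of the document to summarize goes here…"  # placeholder

structured_prompt = (
    "You are a paralegal summarizing case law for a busy attorney.\n"    # role
    "Context: the attorney needs a quick brief before a client call.\n"  # context
    "Task: summarize the opinion below in plain English.\n"              # task
    "Constraints: at most five bullet points; flag any dissent.\n"       # limits
    "Format: a bulleted list followed by a one-sentence takeaway.\n\n"   # output
    + opinion_text
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[{"role": "user", "content": structured_prompt}],
)
print(response.choices[0].message.content)
```

The same structure works just as well typed into a chat window; the API version simply makes it repeatable.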





Tools & Techniques. Soon, humans not required.

https://www.police1.com/police-products/police-technology/software/report-writing/axon-releases-draft-one-ai-powered-report-writing-software

Axon releases Draft One, AI-powered report-writing software

Axon has announced the release of Draft One, a new software product that drafts police report narratives in seconds based on auto-transcribed body-worn camera audio, according to a press release.

Reporting is a critical component of good police work; however, it has become a significant time burden. Axon found that officers in the U.S. can spend up to 40% of their time, roughly 15 hours per week, on what is essentially data entry.
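
Axon has not published Draft One's internals, but the description above (auto-transcribed body-worn camera audio fed to a drafting model) maps onto a familiar two-step pattern: speech-to-text, then an LLM drafting from the transcript. Here is a rough sketch under that assumption, using OpenAI's transcription and chat APIs as stand-ins; the file name, model names, and prompt are hypothetical.

```python
# Illustrative only: Axon has not published how Draft One works. This
# sketches the generic transcribe-then-draft pattern with the OpenAI
# Python client; file name, model names, and prompt are hypothetical.
from openai import OpenAI

client = OpenAI()

# Step 1: transcribe the body-worn camera audio to text.
with open("bodycam_audio.mp3", "rb") as audio_file:
    transcript = client.audio.transcriptions.create(
        model="whisper-1", file=audio_file
    )

# Step 2: have an LLM draft a report narrative from the transcript only.
draft = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[
        {"role": "system",
         "content": "Draft a neutral, first-person police report narrative "
                    "strictly from the transcript. Mark anything unclear as "
                    "[OFFICER TO VERIFY]."},
        {"role": "user", "content": transcript.text},
    ],
)
print(draft.choices[0].message.content)
```

Whatever the real pipeline looks like, the draft is only a starting point: the officer still has to review, correct, and sign off before anything is filed.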





Tools & Techniques.

https://www.lawnext.com/2024/04/launching-today-the-first-meeting-bot-specifically-for-legal-professionals-for-use-in-depositions-hearings-and-more.html

Exclusive: Launching Today Is The First Meeting Bot Specifically for Legal Professionals, for Use In Depositions, Hearings, and More

You may have noticed of late that many of your video meetings have an unfamiliar attendee — a meeting bot, invited by one of the human participants, that produces a recording or transcript when the meeting is over. But while there are several such products on the market, none have been developed to meet the specific needs of legal professionals.

That changes today with the beta launch of CoCounsel.ai, the first legally nuanced meeting bot. It can join a legal event such as a deposition, hearing or arbitration, and it uses legal-specific AI speech-to-text to provide a legally formatted, highly accurate real-time transcript, along with features such as bookmarking, tagging and archiving.
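
CoCounsel.ai's internals are not public. As a small illustration of one piece the article describes, turning diarized speech-to-text output into a deposition-style Q&A transcript, here is a sketch with invented data and a conventional Q./A. format.

```python
# Hypothetical sketch: render diarized speech-to-text segments in the
# Q./A. style used in deposition transcripts. The segment data, speaker
# roles, and formatting conventions are all invented for illustration.
segments = [
    {"speaker": "Attorney", "text": "Please state your name for the record."},
    {"speaker": "Witness", "text": "Jane Doe."},
    {"speaker": "Attorney", "text": "Where were you on the night of April 2nd?"},
]

ROLE_PREFIX = {"Attorney": "Q.", "Witness": "A."}  # deposition convention

def format_deposition(segments) -> str:
    """Number each segment and prefix it with the speaker's role marker."""
    lines = []
    for i, seg in enumerate(segments, start=1):
        prefix = ROLE_PREFIX.get(seg["speaker"], seg["speaker"] + ":")
        lines.append(f"{i:>4}  {prefix}  {seg['text']}")
    return "\n".join(lines)

print(format_deposition(segments))
```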



Tuesday, April 23, 2024

This seems to be dominating the news, but I’m not going to spend much time with it.

https://www.bespacific.com/at-the-top-of-the-ticket-a-criminal-defendant/

At the Top of the Ticket, a Criminal Defendant

Greg Olear. Trump may well be a convicted felon by Election Day. He’s still the GOP nominee. “Yesterday, opening statements were heard in the case of The People of the State of New York v. Donald J. Trump. The defendant—a fixture in the New York tabloids for decades, a former reality TV star, and, improbably, the 45th President of the United States—is accused of “the crime of FALSIFYING BUSINESS RECORDS IN THE FIRST DEGREE, in violation of Penal Law §175.10,” a Class E felony. There are 34 counts in the indictment, each one specifying a unique instance of Trump running afoul of the law… A Class E felony is as low-rung as it sounds. This isn’t instigating a coup against our democracy, or making off with top secret documents, or bullying Georgia election officials to ensure that an election went his way. In the grand scheme of things, these counts are minor crimes. All it takes is one intractable MAGA on the jury who thinks this is a Deep State conspiracy, or that Stormy Daniels is some vindictive gold-digger, and Trump will skate. Even so, a former POTUS is a criminal defendant. Let’s pause for a moment and—to use a phrase I abhor that was ubiquitous on Twitter seven years ago—let that sink in. None of the other 43 previous presidents (Grover Cleveland was 22 and 24) were indicted for even a single crime, Ulysses Grant’s need for speed notwithstanding. Nixon likely would have been but was pre-emptively pardoned, so we’ll never know. A FPOTUS indictment, therefore, is unprecedented. And this is just the first of Trump’s criminal trials. There are three more pending. Not one, not two, but three: four, altogether. Four! That doesn’t even take into account the civil fraud case, where the State of New York is poised to seize almost half a billion dollars in assets from Trump pending appeal—and that assumes that the bond he secured winds up being legit…”

See also Axios: New York Courts to release daily transcripts from Trump hush money trial



Yes and no. Some things change, some remain the same.

https://www.axios.com/2024/04/16/ai-top-secret-intelligence

"Top secret" is no longer the key to good intel in an AI world: report

… Today's intelligence systems cannot keep pace with the explosion of data now available, requiring "rapid" adoption of generative AI to keep an intelligence advantage over rival powers.

  • The U.S. intelligence community "risks surprise, intelligence failure, and even an attrition of its importance" unless it embraces AI's capacity to process floods of data, according to the report from the Special Competitive Studies Project.

  • The federal government needs to think more in terms of "national competitiveness" than "national security," given the wider range of technologies now used to attack U.S. interests.



Something I have been meaning to try. Could this turn Shakespeare into a graphic novel?

https://www.makeuseof.com/best-open-source-ai-image-generators/

The 5 Best Open-Source AI Image Generators

AI-based text-to-image generation models are everywhere and becoming easier to access daily. While it's easy just to visit a website and generate the image you're looking for, open-source text-to-image generators are your best bet if you want more control over the generation process.
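
To make the "more control" point concrete, here is a minimal sketch of running Stable Diffusion locally with Hugging Face's diffusers library; the checkpoint ID is one real example, and the prompt nods at the Shakespeare question above. Seed, step count, and guidance scale are exactly the knobs a hosted website typically hides.

```python
# A minimal sketch of local text-to-image generation with Hugging Face
# diffusers (pip install diffusers transformers torch). The checkpoint
# is one real example; any compatible Stable Diffusion model works.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")  # use "cpu" without a GPU (much slower)

generator = torch.Generator("cuda").manual_seed(42)  # reproducible output
image = pipe(
    "A graphic-novel panel of Hamlet holding Yorick's skull, ink and watercolor",
    num_inference_steps=30,  # fewer steps: faster but rougher
    guidance_scale=7.5,      # how strongly to follow the prompt
    generator=generator,
).images[0]
image.save("hamlet_panel.png")
```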




Sunday, April 21, 2024

Long-term implications? Pollution of the LLM corpus.

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4771884

Do large language models have a legal duty to tell the truth?

Careless speech is a new type of harm created by large language models (LLMs) that poses cumulative, long-term risks to science, education, and the development of shared social truths in democratic societies. LLMs produce responses that are plausible, helpful, and confident but that contain factual inaccuracies, inaccurate summaries, misleading references, and biased information. These subtle mistruths are poised to cause a severe cumulative degradation and homogenisation of knowledge over time. This article examines the existence and feasibility of a legal duty for LLM providers to create models that “tell the truth.” We argue that LLM providers should be required to mitigate careless speech and better align their models with truth through open, democratic processes. Careless speech is defined and contrasted with the simplified concept of “ground truth” in LLMs and with prior discussion of truth-related risks in LLMs, including hallucinations, misinformation, and disinformation. The existence of truth-related obligations in EU law is then assessed, focusing on human rights law and liability frameworks for products and platforms. Current frameworks generally contain relatively limited, sector-specific truth duties. The article concludes by proposing a pathway to create a legal truth duty applicable to providers of both narrow- and general-purpose LLMs.





Law firms will use AI. How will they prepare?

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4794225

Leveraging The Use of Artificial Intelligence In Legal Practice

The integration of Artificial Intelligence (AI) into legal practice has revolutionized the legal landscape, offering unprecedented opportunities for efficiency and accuracy. By embracing AI technologies and adapting to the evolving legal landscape, legal professionals can enhance efficiency, accuracy, and client satisfaction, ultimately shaping the future of the legal profession. However, the adoption of AI in legal practice also presents challenges, including ethical considerations, data privacy concerns, and the need for specialized training. As legal professionals embrace AI technologies, it becomes imperative to address these challenges proactively and ensure responsible and ethical use. This presentation explores the diverse applications of AI in legal practice and its implications for the legal profession.



Saturday, April 20, 2024

Do we need a chapter here?

https://www.geekwire.com/2024/seattle-tech-vet-calls-rapidly-growing-ai-tinkerers-meetups-the-new-homebrew-computer-club-for-ai/

Seattle tech vet calls rapidly growing ‘AI Tinkerers’ meetups the new ‘Homebrew Computer Club’ for AI

A first meetup in Seattle in November 2022 attracted 12 people. A second in Austin was led by GitHub Copilot creator Alex Graveley, who came up with the name “AI Tinkerers.”

Nearly a year and a half later, Heitzeberg said the idea has taken off and is going global. In a LinkedIn post last week, he said eight cities — from Seattle to Chicago to Boston to Medellin, Colombia, and elsewhere — have AI Tinkerers meetups planned over the next month.

“We are kind of the Homebrew Computer Club of AI,” Heitzeberg said, referencing the famed hobbyist group that gathered in Silicon Valley in the mid-1970s to mid-1980s and attracted the likes of Apple founders Steve Jobs and Steve Wozniak. “It was people trying stuff. It’s that for AI, and it’s really needed and really good for innovation.”



Friday, April 19, 2024

I worry that “force” might eventually include beating a password out of me.

https://www.bespacific.com/cops-can-force-suspect-to-unlock-phone-with-thumbprint-us-court-rules/

Cops can force suspect to unlock phone with thumbprint, US court rules

Ars Technica: “The US Constitution’s Fifth Amendment protection against self-incrimination does not prohibit police officers from forcing a suspect to unlock a phone with a thumbprint scan, a federal appeals court ruled yesterday. The ruling does not apply to all cases in which biometrics are used to unlock an electronic device but is a significant decision in an unsettled area of the law. The US Court of Appeals for the 9th Circuit had to grapple with the question of “whether the compelled use of Payne’s thumb to unlock his phone was testimonial,” the ruling in United States v. Jeremy Travis Payne said. “To date, neither the Supreme Court nor any of our sister circuits have addressed whether the compelled use of a biometric to unlock an electronic device is testimonial.” A three-judge panel at the 9th Circuit ruled unanimously against Payne, affirming a US District Court’s denial of Payne’s motion to suppress evidence. Payne was a California parolee who was arrested by California Highway Patrol (CHP) after a 2021 traffic stop and charged with possession with intent to distribute fentanyl, fluorofentanyl, and cocaine. There was a dispute in District Court over whether a CHP officer “forcibly used Payne’s thumb to unlock the phone.” But for the purposes of Payne’s appeal, the government “accepted the defendant’s version of the facts, i.e., ‘that defendant’s thumbprint was compelled.'” Payne’s Fifth Amendment claim “rests entirely on whether the use of his thumb implicitly related certain facts to officers such that he can avail himself of the privilege against self-incrimination,” the ruling said. Judges rejected his claim, holding “that the compelled use of Payne’s thumb to unlock his phone (which he had already identified for the officers) required no cognitive exertion, placing it firmly in the same category as a blood draw or fingerprint taken at booking.” “When Officer Coddington used Payne’s thumb to unlock his phone—which he could have accomplished even if Payne had been unconscious—he did not intrude on the contents of Payne’s mind,” the court also said…”





Perspective. Worth an hour of your time.

https://www.nationalreview.com/corner/the-rise-of-the-machines-john-etchemendy-and-fei-fei-li-on-our-ai-future/

The Rise of The Machines: John Etchemendy and Fei-Fei Li on Our AI Future

John Etchemendy and Fei-Fei Li are the co-directors of Stanford’s Institute for Human-Centered Artificial Intelligence (HAI), founded in 2019 to “advance AI research, education, policy and practice to improve the human condition.” In this interview, they delve into the origins of the technology, its promise, and its potential threats. They also discuss what AI should be used for, where it should not be deployed, and why we as a society should — cautiously — embrace it.





Interesting story of an unlevel playing field.

https://lawrencekstimes.com/2024/04/18/lhs-journalists-dispute-gaggle/

Lawrence journalism students convince district to reverse course on AI surveillance they say violates freedom of press

Journalism students at Lawrence High School have convinced the school district to remove their files from the purview of a controversial artificial intelligence surveillance system after months of debate with administrators.

The AI software, called Gaggle, sifts through anything connected to the district’s Google Workspace — which includes Gmail, Drive and other products — and flags content it deems a safety risk, such as allusions to self-harm, depression, drug use and violence.
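
Gaggle's detection methods are proprietary, so the sketch below is only a naive illustration of why context-blind scanning worries student journalists: a keyword filter cannot tell reporting about violence from a threat of it. All terms, categories, and the sample sentence are invented.

```python
# Naive illustration only; Gaggle's actual methods are proprietary.
# Scans text for terms from a category list and reports matches. Note
# that it cannot distinguish journalism about a topic from the topic.
import re

FLAG_TERMS = {
    "self-harm": ["hurt myself", "end it all"],
    "violence": ["bring a gun", "attack"],
}

def flag_document(text: str) -> list[tuple[str, str]]:
    """Return (category, matched term) pairs found in the text."""
    hits = []
    lowered = text.lower()
    for category, terms in FLAG_TERMS.items():
        for term in terms:
            if re.search(r"\b" + re.escape(term) + r"\b", lowered):
                hits.append((category, term))
    return hits

# A student news story about school safety trips the same filter a
# genuine threat would, which is the journalists' objection.
print(flag_document("Our investigation asks how a student could bring a gun to campus."))
```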