Man uses AI avatar to argue his case in court, and the judge shuts him down

美聯(lián)社
2025-04-09

The latest bizarre chapter in the awkward arrival of artificial intelligence in the legal world.

NEW YORK (AP) — It took only seconds for the judges on a New York appeals court to realize that the man addressing them from a video screen — a person about to present an argument in a lawsuit — not only had no law degree, but didn’t exist at all.

The latest bizarre chapter in the awkward arrival of artificial intelligence in the legal world unfolded March 26 under the stained-glass dome of New York State Supreme Court Appellate Division’s First Judicial Department, where a panel of judges was set to hear from Jerome Dewald, a plaintiff in an employment dispute.

“The appellant has submitted a video for his argument,” said Justice Sallie Manzanet-Daniels. “Ok. We will hear that video now.”

On the video screen appeared a smiling, youthful-looking man with a sculpted hairdo, button-down shirt and sweater.

“May it please the court,” the man began. “I come here today a humble pro se before a panel of five distinguished justices.”

“Ok, hold on,” Manzanet-Daniels said. “Is that counsel for the case?”

“I generated that. That’s not a real person,” Dewald answered.

It was, in fact, an avatar generated by artificial intelligence. The judge was not pleased.

“It would have been nice to know that when you made your application. You did not tell me that, sir,” Manzanet-Daniels said before yelling across the room for the video to be shut off.

“I don’t appreciate being misled,” she said before letting Dewald continue with his argument.

Dewald later penned an apology to the court, saying he hadn’t intended any harm. He didn’t have a lawyer representing him in the lawsuit, so he had to present his legal arguments himself. And he felt the avatar would be able to deliver the presentation without his own usual mumbling, stumbling and tripping over words.

In an interview with The Associated Press, Dewald said he applied to the court for permission to play a prerecorded video, then used a product created by a San Francisco tech company to create the avatar. Originally, he tried to generate a digital replica that looked like him, but he was unable to accomplish that before the hearing.

“The court was really upset about it,” Dewald conceded. “They chewed me up pretty good.”

Even real lawyers have gotten into trouble when their use of artificial intelligence went awry.

In June 2023, two attorneys and a law firm were each fined $5,000 by a federal judge in New York after they used an AI tool to do legal research, and as a result wound up citing fictitious legal cases made up by the chatbot. The firm involved said it had made a “good faith mistake” in failing to understand that artificial intelligence might make things up.

Later that year, more fictitious court rulings invented by AI were cited in legal papers filed by lawyers for Michael Cohen, a former personal lawyer for President Donald Trump. Cohen took the blame, saying he didn’t realize that the Google tool he was using for legal research was also capable of so-called AI hallucinations.

Those were errors, but Arizona’s Supreme Court last month intentionally began using two AI-generated avatars, similar to the one that Dewald used in New York, to summarize court rulings for the public.

On the court’s website, the avatars — who go by “Daniel” and “Victoria” — say they are there “to share its news.”

Daniel Shin, an adjunct professor and assistant director of research at the Center for Legal and Court Technology at William & Mary Law School, said he wasn’t surprised to learn of Dewald’s introduction of a fake person to argue an appeals case in a New York court.

“From my perspective, it was inevitable,” he said.

He said it was unlikely that a lawyer would do such a thing because of tradition and court rules and because they could be disbarred. But he said individuals who appear without a lawyer and request permission to address the court are usually not given instructions about the risks of using a synthetically produced video to present their case.

Dewald said he tries to keep up with technology, having recently listened to a webinar sponsored by the American Bar Association that discussed the use of AI in the legal world.

As for Dewald’s case, it was still pending before the appeals court as of Thursday.
