You Will Soon Have a Robot Coworker
Microsoft founder Bill Gates recently suggested that robots primed to replace humans in the workplace should be taxed. While Gates's proposal received a mixed reception, it mainly served to stoke an erroneous narrative that humans need to fear robots stealing their jobs.

The whole idea of implementing a robot tax is premature, though not quite 50 to 100 years in the future, as Treasury Secretary Steven Mnuchin believes. Before we can start talking about the capitalization of artificial intelligence (AI) and taxing robots, we need to investigate, decipher, and tackle the serious challenges standing in the way of making robots work effectively for the general consumer and in the workplace.

Within the next five years, robots will be able to perform tasks that significantly and irreversibly impact the traditionally human workforce. But first, the people who build and program all forms of AI need to ensure that their wiring prevents robots from causing more harm than good.

It remains to be seen how important maintaining a human element—managerial or otherwise—will be to the success of departments and offices that choose to employ robots (in place of people) to perform administrative, data-rich tasks. Certainly, though, a superior level of humanity will be required to make wide-ranging decisions and consistently act in the best interest of the actual humans involved in work-related encounters in fully automated environments. In short, humans will need to establish workforce standards and build training programs for AI and robots geared toward filling ethical gaps in robotic cognition.

Enabling AI and robots to make autonomous decisions is one of the trickiest areas for technologists and builders to navigate. Engineers have an occupational responsibility to train robots with the right data so that they make the right calculations and come to the right decisions. Particularly complex challenges could arise in the areas of compliance and governance.
Humans need to go through compliance training in order to understand performance standards and personnel expectations. Similarly, we need to design robots and AI with a complementary compliance framework to govern their interactions with humans in the workplace. That would mean creating universal policies covering the importance of equal opportunity and diversity in the human workforce, enforcing anti-bribery laws, and curbing all forms of fraudulent activity. Ultimately, we need to create a code of conduct for robots that mirrors the professional standards we expect from people. To accomplish this, builders will need to leave room for robots to be accountable for, learn from, and eventually self-correct their own mistakes.

AI and robots will need to be trained to make the right decisions in countless workplace situations. One way to do this would be to create a rewards-based learning system that motivates robots and AI to achieve high levels of productivity. Ideally, this engineer-crafted system would make bots "want to" exceed expectations from the moment they receive their first reward.

Under the current "reinforcement learning" paradigm, a single AI or robot receives positive or negative feedback depending on the outcome generated when it takes a certain action. If we can construct rewards for individual robots, it is possible to use this feedback approach at scale to ensure that a combined network of robots operates efficiently, adjusts based on a diverse set of feedback, and remains generally well-behaved. In practice, rewards should be built not just on what AI or robots do to achieve an outcome, but also on how well the way they accomplish that particular result aligns with human values.

But before we think about taxing robots and AI, we need to get the basics of the self-learning technology right, and develop comprehensive ethical standards that hold up for the long term.
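The reward-shaping idea described above can be sketched in a few lines of code. This is a minimal illustrative toy, not any real workplace-robot API: all names here (OUTCOME_REWARD, FORBIDDEN_ACTIONS, shaped_reward) are invented for the example. It shows how an agent's feedback can combine the outcome it achieves with a penalty for actions that violate a hypothetical code of conduct, so that succeeding "the wrong way" still nets negative feedback.

```python
# Toy reward-shaping sketch (illustrative only; all names are hypothetical).
# The agent is scored on its outcome AND on whether the action it took
# respects a list of human-value constraints.

OUTCOME_REWARD = {
    "task_completed": 1.0,   # positive feedback for a good outcome
    "task_failed": -1.0,     # negative feedback for a bad outcome
}

# Actions that violate the "code of conduct", regardless of outcome.
FORBIDDEN_ACTIONS = {"falsify_record", "bypass_audit"}

def shaped_reward(action: str, outcome: str, penalty: float = 2.0) -> float:
    """Combine outcome feedback with a value-alignment penalty."""
    reward = OUTCOME_REWARD[outcome]
    if action in FORBIDDEN_ACTIONS:
        reward -= penalty  # achieving the goal "the wrong way" is penalized
    return reward

print(shaped_reward("file_report", "task_completed"))     # 1.0
print(shaped_reward("falsify_record", "task_completed"))  # -1.0
```

Because the penalty outweighs the outcome reward, a learning agent that completes its task by falsifying a record still receives net negative feedback, which is the alignment property the article argues for.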
Builders need to ensure that the AI they are creating has the ability to learn and improve, so that it is ethical, adaptable, and accountable before it replaces traditionally human-held jobs. Our responsibility is to make AI that significantly improves upon the work humans do. Otherwise, we will end up replicating our mistakes and replacing human-held jobs with robots that have an ill-defined purpose.

Kriti Sharma is the vice president of bots and AI at Sage Group.