Artificial intelligence

Applications of artificial intelligence are advancing step by step, extending human understanding day by day.

Artificial intelligence (commonly abbreviated as AI) is intelligence demonstrated by machines, as opposed to the natural intelligence displayed by humans or animals. Leading AI textbooks define the field as the study of "intelligent agents": any system that perceives its environment and takes actions that maximize its chance of achieving its goals. Colloquially, the term "artificial intelligence" is often used to describe machines that mimic "cognitive" functions humans associate with the human mind, such as "learning" and "problem solving"; however, this definition is rejected by major AI researchers.
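The textbook definition above — a system that perceives its environment and acts to maximize its chance of achieving its goals — can be sketched as a minimal agent loop. This is a toy illustration only; the percept stream, the `utility` function, and the greedy action choice are assumptions for the example, not any particular textbook's formulation:

```python
from typing import Callable, Iterable

def run_agent(percepts: Iterable[int],
              utility: Callable[[int, str], float],
              actions: list[str]) -> list[str]:
    """Toy rational agent: for each percept, choose the action
    with the highest utility under the agent's utility function."""
    chosen = []
    for percept in percepts:
        best = max(actions, key=lambda a: utility(percept, a))
        chosen.append(best)
    return chosen

# Example: a thermostat-like agent that heats when cold, cools when hot.
def comfort(temp: int, action: str) -> float:
    target = 21
    delta = {"heat": 2, "cool": -2, "wait": 0}[action]
    return -abs(temp + delta - target)  # closer to the target is better

print(run_agent([17, 25, 21], comfort, ["heat", "cool", "wait"]))
# → ['heat', 'cool', 'wait']
```

Even this trivial loop exhibits the structure of the definition: perception (the incoming temperature), goal-directed evaluation (the utility function), and action selection that maximizes expected success.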

AI applications include advanced web search engines (e.g., Google), recommendation systems (used by YouTube, Amazon and Netflix), understanding human speech (such as Siri or Alexa), self-driving cars (e.g., Tesla), and competing at the highest level in strategic game systems (such as chess and Go).[1] As machines become increasingly capable, tasks considered to require "intelligence" are often removed from the definition of AI, a phenomenon known as the AI effect.[2]

The future of artificial intelligence

Superintelligence

A superintelligence, hyperintelligence, or superhuman intelligence is a hypothetical agent whose intellect far surpasses that of the most gifted human minds. Superintelligence may also refer to the form or degree of intelligence possessed by such an agent.[3]

Technological singularity

If research into artificial general intelligence produced sufficiently intelligent software, that software might be able to reprogram and improve itself.
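The feedback loop behind this idea — a system proposing a modified version of itself and keeping the modification only if it performs better — can be illustrated with a toy hill-climbing sketch. Here the "program" is reduced to a single numeric parameter and the `capability` function is a made-up stand-in; this illustrates the shape of the loop, not a model of AGI:

```python
import random

def self_improve(score, candidate: float, steps: int = 200) -> float:
    """Toy recursive improvement: each generation proposes a randomly
    mutated successor and adopts it only if it scores higher."""
    random.seed(0)  # fixed seed so the example is reproducible
    for _ in range(steps):
        successor = candidate + random.uniform(-1.0, 1.0)
        if score(successor) > score(candidate):
            candidate = successor
    return candidate

# Example: "capability" peaks at parameter value 3.0.
capability = lambda x: -(x - 3.0) ** 2
best = self_improve(capability, candidate=0.0)
print(best)  # converges near 3.0
```

The singularity argument is essentially the claim that, unlike this bounded toy, a sufficiently general system's improvements would compound without an obvious ceiling.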

References

  1. Google 2016.
  2. McCorduck 2004, s. 204.
  3. Roberts 2016.

Further reading

  • DH Autor, "Why Are There Still So Many Jobs? The History and Future of Workplace Automation" (2015) 29(3) Journal of Economic Perspectives 3.
  • Boden, Margaret, Mind As Machine, Oxford University Press, 2006.
  • Cukier, Kenneth, "Ready for Robots? How to Think about the Future of AI", Foreign Affairs, vol. 98, no. 4 (July/August 2019), pp. 192–98. George Dyson, historian of computing, writes (in what might be called "Dyson's Law") that "Any system simple enough to be understandable will not be complicated enough to behave intelligently, while any system complicated enough to behave intelligently will be too complicated to understand." (p. 197.) Computer scientist Alex Pentland writes: "Current AI machine-learning algorithms are, at their core, dead simple stupid. They work, but they work by brute force." (p. 198.)
  • Domingos, Pedro, "Our Digital Doubles: AI will serve our species, not control it", Scientific American, vol. 319, no. 3 (September 2018), pp. 88–93.
  • Gopnik, Alison, "Making AI More Human: Artificial intelligence has staged a revival by starting to incorporate what we know about how children learn", Scientific American, vol. 316, no. 6 (June 2017), pp. 60–65.
  • Johnston, John (2008) The Allure of Machinic Life: Cybernetics, Artificial Life, and the New AI, MIT Press.
  • Koch, Christof, "Proust among the Machines", Scientific American, vol. 321, no. 6 (December 2019), pp. 46–49. Christof Koch doubts the possibility of "intelligent" machines attaining consciousness, because "[e]ven the most sophisticated brain simulations are unlikely to produce conscious feelings." (p. 48.) According to Koch, "Whether machines can become sentient [is important] for ethical reasons. If computers experience life through their own senses, they cease to be purely a means to an end determined by their usefulness to… humans. Per GNW [the Global Neuronal Workspace theory], they turn from mere objects into subjects… with a point of view…. Once computers' cognitive abilities rival those of humanity, their impulse to push for legal and political rights will become irresistible—the right not to be deleted, not to have their memories wiped clean, not to suffer pain and degradation. The alternative, embodied by IIT [Integrated Information Theory], is that computers will remain only supersophisticated machinery, ghostlike empty shells, devoid of what we value most: the feeling of life itself." (p. 49.)
  • Marcus, Gary, "Am I Human?: Researchers need new ways to distinguish artificial intelligence from the natural kind", Scientific American, vol. 316, no. 3 (March 2017), pp. 58–63. A stumbling block to AI has been an incapacity for reliable disambiguation. An example is the "pronoun disambiguation problem": a machine has no way of determining to whom or what a pronoun in a sentence refers. (p. 61.)
  • E McGaughey, "Will Robots Automate Your Job Away? Full Employment, Basic Income, and Economic Democracy" (2018) SSRN, part 2(3). Archived at the Wayback Machine (24 May 2018).
  • George Musser, "Artificial Imagination: How machines could learn creativity and common sense, among other human qualities", Scientific American, vol. 320, no. 5 (May 2019), pp. 58–63.
  • Myers, Courtney Boyd, ed. (2009). "The AI Report". Forbes, June 2009. Archived at the Wayback Machine (29 July 2017).
  • Raphael, Bertram. The Thinking Computer. W.H. Freeman and Co., 1976. ISBN 978-0716707233. 
  • Scharre, Paul, "Killer Apps: The Real Dangers of an AI Arms Race", Foreign Affairs, vol. 98, no. 3 (May/June 2019), pp. 135–44. "Today's AI technologies are powerful but unreliable. Rules-based systems cannot deal with circumstances their programmers did not anticipate. Learning systems are limited by the data on which they were trained. AI failures have already led to tragedy. Advanced autopilot features in cars, although they perform well in some circumstances, have driven cars without warning into trucks, concrete barriers, and parked cars. In the wrong situation, AI systems go from supersmart to superdumb in an instant. When an enemy is trying to manipulate and hack an AI system, the risks are even greater." (p. 140.)
  • Serenko, Alexander (2010). "The development of an AI journal ranking based on the revealed preference approach" (PDF). Journal of Informetrics, vol. 4, no. 4, pp. 447–59. doi:10.1016/j.joi.2010.04.001. Archived from the original (PDF) on 4 October 2013. Retrieved 24 August 2013.
  • Serenko, Alexander; Michael Dohan (2011). "Comparing the expert survey and citation impact journal ranking methods: Example from the field of Artificial Intelligence" (PDF). Journal of Informetrics, vol. 5, no. 4, pp. 629–49. doi:10.1016/j.joi.2011.06.002. Archived from the original (PDF) on 4 October 2013. Retrieved 12 September 2013.
  • Tom Simonite, "2014 in Computing: Breakthroughs in Artificial Intelligence", MIT Technology Review (29 December 2014).

  • Sun, R. & Bookman, L. (eds.), Computational Architectures: Integrating Neural and Symbolic Processes. Kluwer Academic Publishers, Needham, MA. 1994.
  • Taylor, Paul, "Insanely Complicated, Hopelessly Inadequate" (review of Brian Cantwell Smith, The Promise of Artificial Intelligence: Reckoning and Judgment, MIT, 2019, ISBN 978-0262043045, 157 pp.; Gary Marcus and Ernest Davis, Rebooting AI: Building Artificial Intelligence We Can Trust, Ballantine, 2019, ISBN 978-1524748258, 304 pp.; Judea Pearl and Dana Mackenzie, The Book of Why: The New Science of Cause and Effect, Penguin, 2019, ISBN 978-0141982410, 418 pp.), London Review of Books, vol. 43, no. 2 (21 January 2021), pp. 37–39. Paul Taylor writes (p. 39): "Perhaps there is a limit to what a computer can do without knowing that it is manipulating imperfect representations of an external reality."
  • Tooze, Adam, "Democracy and Its Discontents", The New York Review of Books, vol. LXVI, no. 10 (6 June 2019), pp. 52–53, 56–57. "Democracy has no clear answer for the mindless operation of bureaucratic and technological power. We may indeed be witnessing its extension in the form of artificial intelligence and robotics. Likewise, after decades of dire warning, the [environmental] problem remains fundamentally unaddressed…. Bureaucratic overreach and environmental catastrophe are precisely the kinds of slow-moving existential challenges that democracies deal with very badly…. Finally, there is the threat du jour: corporations and the technologies they promote." (pp. 56–57.)

External links

  • Artificial intelligence. Internet Encyclopedia of Philosophy.
  • Thomason, Richmond. "Logic and Artificial Intelligence". In Zalta, Edward N. (ed.). Stanford Encyclopedia of Philosophy.
  • Artificial intelligence, BBC Radio — John Agar, Alison Adam & Igor Aleksander (In Our Time, 8 December 2005)