December 24, 2024


BBS KJS Opinion: Is It Possible for AI to Obtain Self-Consciousness? (By BSP Scholars)

In the spirit of providing students a platform to voice their ideas and showcase their writing skills, BBS KJS presents Opinion.

By: Felisha (JC2 Mendel)

The question, “Can Machines Think?” has haunted computer scientists and philosophers since 1950, dividing those who believe that consciousness can arise from the right computations from those who believe that AI can only mimic human communication rather than genuinely comprehend it.

To begin with, let us consider the case for sentient AI. Christof Koch, chief scientist of the MindScope Program at the Allen Institute, argues that a computer’s thinking abilities will soon approach a human’s.


He points to the early days of computer games, which were clearly unrefined and risible; following the innovation of deep neural networks and other technological advances, today’s games render worlds that once existed only in the imagination, and he expects machine thinking to progress just as dramatically.

Furthermore, he explores how consciousness might be attained, drawing on two frameworks: Global Neuronal Workspace (GNW) theory and Integrated Information Theory (IIT). GNW, proposed in 1998, is a widely accepted scientific theory of consciousness that emphasizes the architectural features and functions of the brain. IIT, established in 2004, instead accentuates the intrinsic causal powers of the brain, where Koch defines causal power as the degree to which a system’s present state is directly influenced by its past state and will, in turn, influence its future state.
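
To make the state-dependence idea concrete, here is a minimal sketch; it is a loose analogy only, not IIT’s formal measure of integrated information, and the class and numbers are invented for illustration. It contrasts a stateless mapping with a system whose present state is shaped by its past state and constrains its future one.

```python
# Loose analogy for "causal power" as state dependence (not IIT's
# formal phi measure): past state -> present state -> future state.

def stateless_map(x: int) -> int:
    # Output depends only on the current input; history plays no role.
    return x * 2

class StatefulSystem:
    def __init__(self) -> None:
        self.state = 0  # the present state

    def step(self, stimulus: int) -> int:
        # The past state feeds into the present state...
        self.state = (self.state + stimulus) % 7
        # ...and the present state constrains the next transition.
        return self.state

system = StatefulSystem()
print([system.step(3) for _ in range(4)])  # [3, 6, 2, 5]: history matters
```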


On the contrary, David Hsing, a microprocessor circuit layout designer with 20 years of experience in California’s Silicon Valley, argues that conscious machines are simply cherished fiction, explaining that no hardware or software can be infused with conscious will. A machine can only pretend to understand what it is doing; it never genuinely ‘knows’ what it is doing. He supports this with the Chinese Room argument (published by John Searle in 1980), which concludes that a suitably programmed computer may appear to understand language without producing any genuine understanding, and with the Symbol Manipulator thought experiment, which holds that programs merely manipulate symbols and code that carry no meaning in themselves.
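
To make the symbol-manipulation point concrete, here is a minimal sketch in the spirit of the Chinese Room. It is an illustration only; the rulebook entries are invented, and a real program would be far larger, but the principle is the same: matching and emitting symbols involves no understanding of what they mean.

```python
# A toy "Chinese Room": the program matches incoming symbols against a
# rulebook and emits the paired reply. The output can look fluent, yet
# no meaning enters the process at any point.

RULEBOOK = {
    "你好": "你好！",            # a greeting is answered with a greeting
    "你会思考吗？": "当然会。",  # "can you think?" -> "of course."
}

def room_reply(symbols: str) -> str:
    # Pure lookup: the program never knows what the symbols refer to.
    return RULEBOOK.get(symbols, "请再说一遍。")  # fallback: "please say it again"

print(room_reply("你会思考吗？"))  # prints 当然会。(fluent, but uncomprehending)
```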

Moreover, he asserts that machines lack comprehension because the mind is more than a physical data processor. Machines are programmed, which makes them an extension of their programmers’ will. This, he argues, exposes the shallowness of today’s cutting-edge AI models: machines are designed to convert input into output in ways that humans choose to interpret as meaningful.

Despite the lack of consensus on whether sentient AI is possible, machine learning has been making huge strides in everything from stock-market prediction to superhuman chess. It seems that only time will tell, since the real difficulty lies in how consciousness is defined, and in whether it is as complicated as we think it is.

Note:

* The opinions expressed here are solely those of the author(s), and due care has been exercised to avoid any form of plagiarism.

* If you have an original write-up that you would like to share, why not be part of BBS Opinion? Send us your .docx file via email to kjs.connect@binabangsaschool.com (and do not forget photos!)

* Your writing must be engaging, sensitive, informative, and roughly 200 words or more. 
