Technology has traditionally evolved as the result of human needs. Invention, when prized and rewarded, will invariably rise to meet the free-market demands of society. It is in this realm that Artificial Intelligence research and the resultant expert systems have been forged. Much of the material that relates to the field of Artificial Intelligence deals with human psychology and the nature of consciousness.
Exhaustive debate on consciousness and the possibility of consciousness in machines has, in my opinion, adequately revealed that it is most unlikely that we will ever converse or interact with a machine of artificial consciousness. In John Searle's collection of lectures, Minds, Brains and Science, the arguments centering on the mind-body problem alone are sufficient to convince a reasonable person that science will never unravel the mysteries of consciousness. Key to Searle's analysis of consciousness in the context of Artificial Intelligence machines are his refutations of the strong and weak AI theses. Strong AI Theorists (SATs) believe that in the future, mankind will forge machines that will think as well as, if not better than, humans. To them, only present technology constrains this achievement. The Weak AI Theorists (WATs), almost the converse of the SATs, believe that if a machine performs functions that resemble a human's, then there must be a correlation between it and consciousness.
To them, there is no technological impediment to thinking machines, because our most advanced machines already think. It is important to review Searle's refutations of these theorists' respective propositions to establish a foundation (for the purposes of this essay) for discussing the applications of Artificial Intelligence, both now and in the future.

Strong AI Thesis

The Strong AI Thesis, according to Searle, can be described in four basic propositions. Proposition one categorizes human thought as the result of computational processes. If you believe this proposition, then given enough computational power, memory, inputs, and so on, machines will be able to think.
Proposition two, in essence, relegates the human mind to the software bin. Proponents of this proposition believe that humans just happen to have biological computers that run "wetware" as opposed to software. Proposition three, the Turing proposition, holds that if a conscious being can be convinced, through context-input manipulation, that a machine is intelligent, then it is. Proposition four is where the ends meet the means. It purports that when we are finally able to understand the brain, we will be able to duplicate its functions.
Thus, if we replicate the computational power of the mind, we will then understand it. Through argument and experimentation, Searle is able to refute or severely diminish these propositions. Searle argues that machines may well be able to "understand" syntax, but not the semantics, or meaning, communicated thereby. Essentially, he makes his point by citing the famous "Chinese Room" thought experiment. It is here he demonstrates that a "computer" (a non-Chinese speaker, a book of rules, and the Chinese symbols) can fool a native speaker, yet have no idea what it is saying. By proving that entities don't have to understand what they are processing in order to appear to understand, he refutes proposition one.
Proposition two is refuted by the simple fact that there are no artificial minds or mind-like devices. Proposition two is thus a matter of science fiction rather than a plausible theory. A good chess program, like my (as yet undefeated) Chessmaster 4000 Turbo, refutes proposition three by passing a Turing test. It appears to be intelligent, but I know it beats me through number crunching and symbol manipulation. The Chessmaster 4000 example is also an adequate refutation of Professor Simon's fourth proposition: "you can understand a process if you can reproduce it." Just because the Software Toolworks company created a program for my computer that simulates the behavior of a grandmaster in the game doesn't mean that the computer is indeed intelligent.

Weak AI Thesis

There are five basic propositions that fall in the Weak AI Thesis (WAT) camp.
The first of these states that the brain, due to its complexity of operation, must function something like a computer, the most sophisticated of human inventions. The second WAT proposition states that if a machine's output, when compared to that of a human counterpart, appears to be the result of intelligence, then the machine must be intelligent. Proposition three concerns itself with the similarity between how humans solve problems and how computers do so. Because machines solve problems based on information gathered from their respective surroundings and memory, and by obeying rules of logic, it is argued that machines can indeed think.
The fourth WAT proposition deals with the fact that brains are known to have computational abilities, and that a program therein can be inferred. Therefore, the mind is just a big program ("wetware"). The fifth and final WAT proposition states that, since the mind appears to be "wetware", dualism is valid. Proposition one of the Weak AI Thesis is refuted by gazing into the past.
People have historically associated the state-of-the-art technology of their time with elements of intelligence and consciousness. An example of this is shown in the telegraph system of the latter part of the last century. People at the time saw correlations between the brain and the telegraph network itself. Proposition two is readily refuted by the fact that semantic meaning is not addressed by this argument. The fact that a clock can compute and display time doesn't mean that it has any concept of counting or the meaning of time. Defining the nature of rule-following is where the weakness lies with the third proposition.
Proposition four again fails to account for the semantic nature of symbol manipulation. Referring to the Chinese Room thought experiment best refutes this argument. By examining the nature by which humans make conscious decisions, it becomes clear that the fifth proposition is an item of fancy. Humans follow a virtually infinite set of rules that rarely follow highly ordered patterns. A computer may be programmed to react to syntactic information with seemingly semantic output, but again, is it really cognizant? Through Searle's arguments, we have amply established that the future of AI lies not in the semantic cognition of data by machines, but in expert systems designed to perform ordered tasks. Technologically, there is hope for some of the proponents of the Strong AI Thesis.
This hope lies in the advent of neural networks and the application of fuzzy logic engines. Fuzzy logic was created as a generalization of Boolean logic, designed to handle data that is neither completely true nor completely false. Introduced by Dr. Lotfi Zadeh in 1965, fuzzy logic enabled the modelling of the uncertainties of natural language. Dr. Zadeh regards fuzzy theory not as a single theory, but as "fuzzification", or the generalization of specific theories from discrete (crisp) forms to continuous (fuzzy) forms.
The meat and potatoes of fuzzy logic is in the extrapolation of data from sets of variables. A fairly apt example of this is the variable lamp. Conventional Boolean logic deals well with the binary nature of lights: they are either on or off. But introduce the variable lamp, which can range in intensity from logically on to logically off, and this is where applications demanding fuzzy logic come in.
Using fuzzy algorithms on sets of data, such as differing intensities of illumination over time, we can infer a comfortable lighting level based upon an analysis of the data. Taking fuzzy logic one step further, we can incorporate it into fuzzy expert systems. Such a system takes collections of data in fuzzy rule format. According to Dr. Zadeh, the rules in a fuzzy logic expert system will usually follow this simple form: "if x is low and y is high, then z is medium". Under this rule, x is the low value of a set of data (the light is off) and y is the high value of the same set of data (the light is fully on). z is the output of the inference, based upon the degree of fuzzy logic application desired. It is logical to determine that, based upon the inputs, more than one output (z) may be ascertained. The set of rules in a fuzzy logic expert system is described as the rulebase.
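The rule form above can be sketched in a few lines of code. What follows is a minimal, hypothetical Python sketch of the lamp example; the triangular membership shapes and the 0-to-1 intensity scale are my own illustrative assumptions, and fuzzy AND is taken, conventionally, as the minimum of the two memberships.

```python
def tri(x, a, b, c):
    """Triangular membership function: peaks at b, zero outside [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Hypothetical membership sets for lamp intensity on a 0..1 scale.
def low(v):    return tri(v, -0.5, 0.0, 0.5)
def high(v):   return tri(v, 0.5, 1.0, 1.5)
def medium(v): return tri(v, 0.25, 0.5, 0.75)

def rule_strength(x, y):
    """Fire 'if x is low and y is high, then z is medium':
    the premise truth is the minimum (fuzzy AND) of the memberships."""
    return min(low(x), high(y))

# x nearly off and y nearly fully on -> the rule fires strongly.
print(rule_strength(0.1, 0.9))  # a strength close to 0.8
```

The strength returned here is what later clips the "medium" output set during inference; a different pair of inputs simply fires the rule to a different degree.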
The fuzzy logic inference process follows three firm steps and sometimes an optional fourth. They are:

1. Fuzzification is the process by which the membership functions determined for the input variables are applied to their true values, so that the truthfulness of rules may be established.

2. Under inference, truth values for each rule's premise are calculated and then applied to the output portion of each rule.

3. Composition is where all of the fuzzy subsets of a particular problem are combined into a single fuzzy variable for a particular outcome.

4. Defuzzification is the optional process by which fuzzy data is converted to a crisp variable. In the lighting example, a level of illumination can be determined (such as potentiometer or lux values).

A new form of information theory is Possibility Theory.
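The four steps can be tied together in one short sketch. This is a hypothetical, minimal Mamdani-style pipeline for the lamp example, in which the membership functions, the three-rule rulebase, and the discretization grid are my own illustrative assumptions: fuzzification applies the membership functions to the ambient-light reading, inference clips each rule's output set by its premise strength, composition takes the pointwise maximum across rules, and defuzzification computes the centroid to yield a crisp lamp level.

```python
def tri(x, a, b, c):
    # Triangular membership function: peaks at b, zero outside [a, c].
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Illustrative membership sets over a 0..1 scale.
LOW    = lambda v: tri(v, -0.5, 0.0, 0.5)
MEDIUM = lambda v: tri(v,  0.0, 0.5, 1.0)
HIGH   = lambda v: tri(v,  0.5, 1.0, 1.5)

# Rulebase: (premise on ambient light, output set for lamp level),
# e.g. "if ambient is LOW then lamp is HIGH".
RULES = [(LOW, HIGH), (MEDIUM, MEDIUM), (HIGH, LOW)]

def infer(ambient, steps=101):
    grid = [i / (steps - 1) for i in range(steps)]
    # Steps 1 and 2 -- fuzzification and inference: each premise's truth
    # value will clip its rule's output set.
    strengths = [(premise(ambient), out) for premise, out in RULES]
    # Step 3 -- composition: pointwise max over all clipped output sets.
    composed = [max(min(s, out(z)) for s, out in strengths) for z in grid]
    # Step 4 -- defuzzification: centroid of the composed fuzzy set.
    area = sum(composed)
    return sum(m * z for m, z in zip(composed, grid)) / area if area else 0.0
```

With a dark room (ambient near 0), only the first rule fires strongly and the centroid lands near the high end of the lamp range; with a bright room, the situation is mirrored.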
This theory is similar to, but independent of, fuzzy theory. By evaluating sets of data (either fuzzy or discrete), rules regarding relative distribution can be determined and possibilities can be assigned. It is logical to assert that the more data that's available, the better the possibilities that can be determined. The application of fuzzy logic to neural networks (properly known as artificial neural networks) will revolutionize many industries in the future.
Though we have determined that conscious machines may never come to fruition, expert systems will certainly gain "intelligence" as the wheels of technological innovation turn. A neural network is loosely based upon the design of the brain itself. Though the brain is impossibly intricate and complex, it has a reasonably well understood feature in its networking of neurons. The neuron is the foundation of the brain itself; each one manifests up to 50,000 connections to other neurons. Multiply that by 100 billion neurons, and one begins to grasp the magnitude of the brain's computational ability. A neural network is a network of a multitude of simple processors, each with a small amount of memory.
These processors are connected by unidirectional data busses and process only information addressed to them. A centralized processor acts as a traffic cop for data, which is parcelled out to the neural network and retrieved in its digested form. Logically, the more processors connected in the neural net, the more powerful the system. Like the human brain, neural networks are designed to acquire data through experience, or learning. By providing examples to a neural network expert system, generalizations are made much as they are by young children learning about items (such as chairs, dogs, etc.).
Modern neural network system properties include a greatly enhanced computational ability due to the parallelism of their circuitry. They have also proven themselves in fields such as mapping, where minor errors are tolerable, there is a lot of example data, and rules are generally hard to nail down. Educating neural networks begins by programming a "backpropagation of error" routine, which is the foundational operating system that defines the inputs and outputs of the system. The best example I can cite is the Windows operating system from Microsoft.
Of course, personal computers don't learn by example, but Windows-based software will not run outside (or in the absence of) Windows. One negative feature of educating neural networks by backpropagation of error is a phenomenon known as "overfitting". Overfitting errors occur when conflicting information is memorized, so the neural network exhibits a degraded state of function as a result. At worst, the expert system may lock up, but it is more common to see an impeded state of operation. By running programs in the operating shell that review data against a database, these problems have been minimized. In the real world, we are seeing an increasing prevalence of neural networks.
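The "learning by example" and "backpropagation of error" described above can be sketched for a single artificial neuron. This is a toy illustration under my own assumptions (the logical-OR training data, sigmoid activation, learning rate, and epoch count are not from any system named in this essay): the output error is propagated backwards through the activation's derivative to nudge each weight after every example.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Example data: teach the neuron logical OR from four labelled examples.
DATA = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]

def train(epochs=5000, lr=0.5, seed=0):
    rng = random.Random(seed)
    w = [rng.uniform(-0.5, 0.5) for _ in range(2)]
    b = rng.uniform(-0.5, 0.5)
    for _ in range(epochs):
        for x, target in DATA:
            out = sigmoid(w[0] * x[0] + w[1] * x[1] + b)
            # Backpropagation of error: the output error is scaled by the
            # sigmoid's derivative and fed back to adjust each weight.
            delta = (target - out) * out * (1 - out)
            w[0] += lr * delta * x[0]
            w[1] += lr * delta * x[1]
            b += lr * delta
    return w, b

def predict(w, b, x):
    return sigmoid(w[0] * x[0] + w[1] * x[1] + b)
```

After training, the neuron's output is near 0 for (0, 0) and near 1 for the other three inputs; overfitting does not bite in a toy this small, but with noisy or conflicting examples the same loop would happily memorize the noise.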
To fully realize the potential benefits of neural networks in our lives, research must be intense and global in nature. In the course of my research for this essay, I was privy to several institutions and organizations dedicated to the collaborative development of neural network expert systems. To be a success, research and development of neural networking must address societal problems of high interest and intrigue. Motivating the talents of the computing industry will be the only way we will fully realize the benefits and potential power of neural networks.
There would be no support, naturally, if there were no short-term progress. Research and development of neural networks must be intensive enough to show results before interest wanes. New technology must be developed through basic research to enhance the capabilities of neural net expert systems. It is generally acknowledged that the future of neural networks depends on overcoming many technological challenges, such as data cross-talk (caused by the radio frequencies generated by rapid data transfer) and limited data bandwidth.
Real-world applications of these "intelligent" neural network expert systems include, according to the Artificial Intelligence Center, Knowbots/Infobots and intelligent help desks. These are primarily easily accessible entities that will host a wealth of data and advice for prospective users. Autonomous vehicles are another future application of intelligent neural networks. There may come a time in the future when planes will fly themselves and taxis will deliver passengers without human intervention. Translation is a wonderful possibility for these expert systems. Imagine the ability to have a device translate your spoken English words into Mandarin Chinese! This goes beyond simple language and syntactic manipulation.
Cultural gulfs in language would also be the focus of such devices. Through the course of Mind and Machine, we have established that artificial intelligence's function will not be to replicate the conscious state of man, but to act as an auxiliary to him. Proponents of the Strong AI Thesis and the Weak AI Thesis may hold out, but the inevitable will manifest itself in the end. It may be easy to ridicule those proponents, but I submit that in their research into making conscious machines, they are doing the field a favor in the innovations and discoveries they make.
In conclusion, technology will prevail in the field of expert systems only if the philosophy behind them is clear and strong. We should not strive to make machines that may supplant our causal powers, but rather ones that complement them. To me, these expert systems will not replace man, and they shouldn't. We will see a future where we shall increasingly find ourselves working beside intelligent systems.