
Language and Interpreters

gjohnson5

New member
In my last discussion of Automata and Language http://www.elitefitness.com/forum/showthread.php?t=509760
I discussed how a computer can understand language in terms of the implementation of such a technology, but how does a computer understand a human in real time??? This is the reason computer science has developed interpreters. Compilers can take commands which are human engineered and decipher them. Once deciphered and verified, the commands are translated into a lower level language that the computer can execute. Both need automata at the higher level to understand the language sent by the human (coder), but the main difference is that the interpreter works in real time, whereas the compiler requires that commands be converted into a lower level language before they can be executed at the computer hardware level. The interpreter is much more dynamic and flexible, but it is also slower. This is the reason the science behind hardware capacity keeps advancing, so that hardware can execute commands at higher megahertz and gigahertz rates. These advancements will make computers more pragmatic and functional.
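
To make the distinction concrete, here is a minimal interpreter sketch in Python (the toy command language "ADD 2 3" is invented for illustration and is not from the original discussion). The interpreter deciphers each command and executes it immediately, in "real time", without producing lower level code first:

OPS = {
    "ADD": lambda a, b: a + b,
    "SUB": lambda a, b: a - b,
    "MUL": lambda a, b: a * b,
}

def interpret(line):
    # Decipher one command and execute it on the spot; nothing lower level is emitted.
    op, *args = line.split()
    if op not in OPS or len(args) != 2:
        raise ValueError("gibberish command: " + line)
    a, b = int(args[0]), int(args[1])
    return OPS[op](a, b)

for cmd in ["ADD 2 3", "MUL 4 5"]:
    print(cmd, "=>", interpret(cmd))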

Interpreters decipher language in real time and translate it into a lower level language that the hardware can understand. Once a schematic of the language is created from the higher level automata, this schematic is used as a guideline for the computer to execute higher level commands. These higher level commands are converted into lower level commands which the computer's hardware is designed to understand. This lower level language is called "machine language". The commands that the hardware is designed to understand at its lowest "atomic" level are what we call machine language. So there is a lower level automaton designed to send commands to the hardware in the form that specific hardware is designed to understand. This creates a level of complexity and translation such that Language B is converted into Language A, and Language A then executes the commands sent in Language B in Language A format...
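
For contrast, a compiler-style sketch under the same toy assumptions: the high level command (Language B) is first translated into a list of lower level stack-machine instructions (a stand-in for Language A / machine language), and only then is that instruction list executed by a separate "hardware" loop:

def compile_command(line):
    # Translation step: Language B ("ADD 2 3") -> Language A (stack instructions).
    op, a, b = line.split()
    return [("PUSH", int(a)), ("PUSH", int(b)), (op, None)]

def run(program):
    # Execution step: a trivial stand-in for hardware executing "machine language".
    stack = []
    for instr, arg in program:
        if instr == "PUSH":
            stack.append(arg)
        else:
            b, a = stack.pop(), stack.pop()
            stack.append(a + b if instr == "ADD" else a * b if instr == "MUL" else a - b)
    return stack.pop()

program = compile_command("ADD 2 3")
print(program)        # the lower level form
print(run(program))   # 5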

This is the purpose of an interpreter. The commands sent to an interpreter are analyzed for their "correctness" in Language B. This is what we call "semantics": WTF did you mean??? Humans do this as we communicate with each other, and computers need similar abilities to distinguish correct phrases or "sentences" in a language from gibberish. Thus commands can be sent to a computer, and the "interpreter" will "decipher" the commands and create a set of lower level commands to execute.
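
A rough sketch of that "correct sentence vs gibberish" check, using a small finite automaton over token classes (the token classes and transitions are made up for the example; real languages need much richer automata, plus a separate semantic pass on top):

import re

def token_class(tok):
    if tok in ("ADD", "SUB", "MUL"):
        return "OP"
    if re.fullmatch(r"-?\d+", tok):
        return "NUM"
    return "JUNK"

# DFA: a well-formed command is exactly OP NUM NUM.
TRANSITIONS = {
    ("start", "OP"): "have_op",
    ("have_op", "NUM"): "one_arg",
    ("one_arg", "NUM"): "accept",
}

def is_well_formed(line):
    state = "start"
    for tok in line.split():
        state = TRANSITIONS.get((state, token_class(tok)))
        if state is None:
            return False   # no transition: reject as gibberish
    return state == "accept"

print(is_well_formed("ADD 2 3"))        # True
print(is_well_formed("ADD two three"))  # False, gibberish to this automaton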

As languages change, the interpreters need updating in order to understand the language changes. This gives the interpreter new abilities. As features are requested, higher level automata are created to demonstrate or illustrate the changes. This is what we call in IT a "Proof of Concept". In order to functionally "prove" that technical concepts are in fact pragmatic, processes are created to prove the concepts on a functional level. I will get back to the rift between computer science and business later.

Anyway, once a bridge is created between the science of the interpreter and the real time functionality of the interpreter, then code or programs can be created such that a computer can in fact decipher and execute commands sent by a human to the machine. Obviously testing and quality assurance will be needed to ensure that the commands are in fact being "interpreted" as they should be, but this is part of the "proof of concept".

Anyway, the main issue with computer science and IT is that concepts may not necessarily have a functional path from the drawing board to the execution phase. This is the reason scientists are needed. Code writers engineer code; they do not create concepts. This is the "rift" between the two areas.
 
Hopefully the science will catch up and the language that you can throw at the computer will be English instead of Java or CFML or Smalltalk (computer languages). But scientific models or automata will need to be created first to model the English language, and then interpreters can be written to analyze the structure (syntax) and decipher the meaning (semantics) of English.
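
As a hint of what "modeling the English language" means in automata terms, here is a toy context-free grammar for a tiny English fragment with a brute force recognizer. It covers only a handful of words and only checks syntax; it says nothing about semantics, and it is an illustrative assumption rather than a claim about how a real English interpreter would be built:

GRAMMAR = {
    "S":   [["NP", "VP"]],
    "NP":  [["Det", "N"]],
    "VP":  [["V"], ["V", "NP"]],
    "Det": [["the"], ["a"]],
    "N":   [["dog"], ["cat"]],
    "V":   [["sees"], ["runs"]],
}

def derive(symbol, words):
    # Yield every possible remainder of `words` after consuming a derivation of `symbol`.
    for production in GRAMMAR[symbol]:
        remainders = [words]
        for part in production:
            next_remainders = []
            for rem in remainders:
                if part in GRAMMAR:                # non-terminal: expand recursively
                    next_remainders.extend(derive(part, rem))
                elif rem and rem[0] == part:       # terminal: match one word
                    next_remainders.append(rem[1:])
            remainders = next_remainders
        for rem in remainders:
            yield rem

def is_sentence(text):
    return any(rem == [] for rem in derive("S", text.lower().split()))

print(is_sentence("the dog sees a cat"))  # True: matches the S -> NP VP structure
print(is_sentence("dog the sees cat a"))  # False: rejected as ungrammatical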
 
It's basically programming for programming.. you'd program a compiler in one language to be able to understand English.. this would simply be the breaking down of the English language by yet another program.. it's never gonna happen :( especially not with English. There are too many variations in saying the same thing
 
Wow, a response to this science....

1. English interpreters are just a matter of time. Once a model of the English language (or any other natural language for that matter) is created, then code can be "compiled" to emulate that language.

2. Semantics, even now as humans speak English, is not completely understood. I don't believe that science (in its current state) can generate automata for jargon. It's simply too dynamic. But if the automata were "tailored" to the "jargon" of certain areas, such as golf or football, then the machines might understand the "lingo" just for that specific area.

3. Jargon or slang is geography dependent. Slang in NYC is not the same as in Long Beach, CA. English in London, England is also different from English in the US. Thus alterations would need to be made for geography.

4. Creativity would need to be programmed in what I would describe as a "neural network" in the machine, so the machine would have some ability to "express itself". A current example of this is the IBM RS/6000 machine "Deep Blue" beating the human world chess champion. The reason for this is that risk, creativity, offense, and defense have already been programmed into the machine's decision making processes. The "B tree" concept has been expanded into such a science that the machine can make decisions based on the decisions of its opponent (a minimal sketch of that game-tree idea follows below). This was actually going to be my next thread. The science here is quite fascinating.

Anyway all these things are just a matter of time
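
On point 4, here is the promised minimal sketch of the game-tree idea in Python: the machine picks its move by assuming the opponent will answer with whatever is worst for the machine, and so on down the tree (minimax). The tiny tree and its scores are invented for illustration; a real chess program adds an evaluation function, pruning, and enormous search depth on top of this skeleton.

# Leaves are scores from the machine's point of view; internal nodes map
# a move name to the subtree that follows it.
TREE = {
    "attack":  {"defend": 3, "counter": -2},
    "develop": {"defend": 1, "counter": 2},
}

def minimax(node, machine_to_move):
    if not isinstance(node, dict):   # leaf: return its score directly
        return node
    scores = [minimax(child, not machine_to_move) for child in node.values()]
    return max(scores) if machine_to_move else min(scores)

def best_move(tree):
    # Choose the move whose subtree has the best guaranteed (worst-case) outcome.
    return max(tree, key=lambda move: minimax(tree[move], machine_to_move=False))

print(best_move(TREE))   # "develop": its worst case (1) beats attack's worst case (-2)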
 
We already know that the brain is infinitely more powerful than any computer.. I don't think that there can be enough variables for ..variance to make this a likely prospect..
 
Agreed that major inroads are being made in computational skills, but I think you're overstating the case with Deep Blue.
Computers win through being able to make a massive number of calculations in a short period of time. Programmers can assign values to various concepts of importance, such as king safety or occupying the center of the board, but the program lacks any truly creative ability. Deep Blue could analyze positions 12 plies ahead, or 6 moves for each side. This is a tremendous advantage over humans in that competitive chess is played with a time limit. We're just not able to think that fast.
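
As a rough illustration of "assigning values to various concepts of importance", here is a sketch of a static evaluation function: the position is scored as a weighted sum of hand-picked features. The feature names, weights, and example positions below are invented for the sketch; real engines use far more terms and tune them extensively.

# Score = weighted sum of hand-picked positional features (all values illustrative).
WEIGHTS = {
    "material":    1.0,   # material advantage, in pawns
    "king_safety": 0.2,   # positive if our king is the safer one
    "center":      0.3,   # control of the central squares
}

def evaluate(features):
    # Higher is better from the program's point of view.
    return sum(WEIGHTS[name] * value for name, value in features.items())

quiet_position = {"material": 0, "king_safety": 1.0, "center": 2.0}
piece_grab     = {"material": 2, "king_safety": -2.0, "center": 0.0}

print(evaluate(quiet_position))  # 0.8
print(evaluate(piece_grab))      # 1.6: with king safety under-weighted,
                                 # grabbing material looks better despite the exposed king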
In the case of the match against then world champion Garry Kasparov, Deep Blue's team tailored its program to match up well against Kasparov's style of play and was able to modify the program throughout the match. Kasparov wasn't granted access to Deep Blue's games against previous opponents. This is an important aspect of championship chess: preparing for your opponents by analyzing their prior matches.
Kasparov made the additional mistake of using an 'anti-computer strategy' that was better suited to the previous generation of computers. Basically, he adopted a wait-and-see attitude. In earlier times, a computer would eventually blunder due to a certain amount of randomness in its moves. Nowadays they have a better grasp of opening positions and will carry that ability well into the middle game. This is where a chess grandmaster can capitalize: on small positional mistakes. A computer won't make any major mistakes anymore, and it's next to impossible to defeat once you fall behind, due to its computational ability, so you must be patient and find the small weaknesses in its move selection. Also, computers still tend to undervalue king safety and will sometimes snatch pieces, going for immediate material gain rather than possessing an understanding of the dynamic nature of a position. Human minds are far more adaptive, as Mike pointed out, and will make better 'sense' of a complex position, but often can't make the necessary calculations.
Even today's best computer, Hydra, run by a consortium in Abu Dhabi, owes its improvements to better computational systems. It uses a 64-node Xeon cluster, i.e. several processors running concurrently, along with specially designed chips implemented as FPGAs (field-programmable gate arrays). This allows Hydra to evaluate 200,000,000 moves a second, about the same as Deep Blue, but with vastly increased search power: to a depth of 18 plies, or 9 moves a side. This increase is due to its more modern type B forward pruning techniques and null-move heuristics, concepts I personally have no understanding of. More information can be found at http://www.hydrachess.com
This turned out to be a long post, sorry. Trying to make the point that good computational skills don't necessarily make the beast 'creative'.
 
Yes, we need a thread about neural networks!
God, what a great post.

But currently what coders are attempting to do is create "templates", analogous to the GNU autoconf, automake, and configure technologies, which generate code. With this in place, code can in fact "change" as it's running if used in a dynamic fashion. Generally autoconf and automake generate code and the configure script tells the machine how to compile that code, but they do it when the user tells the computer to do so. What must be stated is that these programs do not write computer programs; they just provide a streamlined base for one to change their code and recompile it without much effort. This is an advancement of Rapid Application Development http://en.wikipedia.org/wiki/Rapid_application_development

If the computer had the ability to "adapt" on the fly, I think that the chess players' exposure to Deep Blue's previous matches would become irrelevant. The science of code generating code is also currently in the works. Businesses have a need for this technology so that business critical applications need not go down in order for alterations to be made to them. This is the reason why interpreters should be used instead of compilers. The interpreter can act on the fly, whereas a compiler only generates commands to be executed. The two paths are essentially two ways to skin a cat: code that generates code, or an interpreter executing code that is changing on the fly.
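
A toy sketch of the "code that generates code" idea in Python (the template and the generated functions are hypothetical; real generators like autoconf/automake work on configure scripts and whole source trees rather than single functions, but the principle of producing and then running new code at run time is the same):

TEMPLATE = """
def {name}(x):
    return x {op} {operand}
"""

def generate_function(name, op, operand):
    source = TEMPLATE.format(name=name, op=op, operand=operand)  # generate the code
    namespace = {}
    exec(source, namespace)                                      # then execute it on the fly
    return namespace[name]

double  = generate_function("double", "*", 2)
add_ten = generate_function("add_ten", "+", 10)
print(double(21), add_ten(32))   # behavior that did not exist until run time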

In terms of processing power, IBM, HP, and Sun produce business solutions. In order to remain completely backwards compatible and to continue supporting their current customer bases, they have to produce solutions with a clear upgrade path, or else tell their customers to scrap their current hardware. The Sun UltraSPARC T1 is an 8 core CPU with the ability to execute 4 threads per core. But since its clock rates are not as high as current Intel or AMD parts, people assume the processor is slower when in fact it is not. http://www.sun.com/processors/UltraSPARC-T1/index.xml
IBM's new RISC chip, the POWER6, is reported to run at up to 6 gigahertz http://news.zdnet.com/2100-9584_22-6124451.html
Basically, the assumption that current AMD or Intel processor solutions are the best out there is simply false if you are not aware of the business critical world.

The other issue is distributed computing. IBM and Sun are two powerhouses in this field and have produced business solutions in distributed computing for decades. They have created some of the most efficient SMP and cluster technology, such that multiple processors can communicate and execute threads simultaneously. Sun has machines with 72 processors, for instance, which are currently part of their business strategy.
http://store.sun.com/CMTemplate/CEServlet?process=SunStore&cmdViewProduct_CP&catid=111375
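
As a very rough illustration of the thread-level parallelism those SMP and multi-core machines are built for, here is a small Python sketch that fans work out over a pool of worker threads. In CPython these threads interleave rather than run CPU-bound work truly in parallel, and real SMP scheduling happens in the operating system and hardware, so this only shows the shape of the idea:

from concurrent.futures import ThreadPoolExecutor
import threading

def work(item):
    # Each task reports which worker thread picked it up.
    return "item %d handled by %s" % (item, threading.current_thread().name)

# 8 workers stand in for the many hardware threads of an SMP or multi-core machine.
with ThreadPoolExecutor(max_workers=8) as pool:
    for line in pool.map(work, range(16)):
        print(line)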

Anyway, I think that current hardware technologies, including reduced energy consumption and the removal of moving parts, will serve to decrease computation times and give machines the ability to adapt while a process is proceeding. This is also in the works. Thank you for the reply.

