gjohnson5
This follows up my last discussion of Automata and Language: http://www.elitefitness.com/forum/showthread.php?t=509760
There I discussed how a computer can understand language in terms of implementing such a technology, but how does a computer understand a human in real time??? This is why computer science developed interpreters. Compilers can take commands engineered by a human and decipher them; once deciphered and verified as valid, the commands are translated into a lower-level language that the computer can execute. Both need automata at the higher level to understand the language sent by the human (the coder), but the main difference is that the interpreter works in real time, whereas the compiler requires that commands be converted into a lower-level language ahead of time so they can be executed at the hardware level. The interpreter is much more dynamic and flexible, but it is also slower. This is one reason the science behind hardware capacity keeps advancing, so that hardware can execute commands at higher megahertz and gigahertz rates. These advancements make computers more pragmatic and functional.
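Here is a rough Python sketch of that difference, using a toy arithmetic "language" I made up just for illustration (real interpreters and compilers are far more involved):

```python
# Toy illustration: the same commands handled two ways.
source = "1 + 2\n10 * 3\n7 - 4"

# "Interpreter" style: decipher and execute each command as it arrives.
def interpret(lines):
    for line in lines.splitlines():
        left, op, right = line.split()
        a, b = int(left), int(right)
        result = {"+": a + b, "-": a - b, "*": a * b}[op]
        print(f"{line} = {result}")

# "Compiler" style: translate everything into a lower-level form first
# (here, Python bytecode), then execute the translated form afterwards.
def compile_then_run(lines):
    code_objects = [compile(line, "<cmd>", "eval") for line in lines.splitlines()]
    for code in code_objects:
        print(eval(code))

interpret(source)
compile_then_run(source)
```

The interpreter pays the translation cost on every command as it runs; the compiled version pays it once, up front.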
Interpreters, in real time, decipher and translate language into a lower-level language that the hardware can understand. Once a schematic of the language is created from the higher-level automata, that schematic is used as a guideline for the computer to execute higher-level commands. Those higher-level commands are converted into lower-level commands that the computer's hardware is designed to understand. This lower-level language is called "machine language": the set of commands the hardware is designed to understand at its lowest, "atomic" level. So there is a lower-level automaton designed to send commands to the hardware in exactly the form that specific hardware is designed to understand. This creates a layer of complexity and translation in which Language B is converted into Language A, and Language A then executes, in its own format, the commands that were sent in Language B...
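To make the Language B to Language A idea concrete, here is a made-up miniature "machine language" and a translator for it. The opcodes and command names are purely my own invention, just to show the shape of the translation:

```python
# Toy "machine": a single accumulator and three invented opcodes.
# A high-level command (Language B) is translated into a list of
# low-level instructions (Language A) that the "hardware" executes.

def translate(command):
    # Language B: "add 5 7 3"  ->  Language A: LOAD/ADD/PRINT instructions
    op, *args = command.split()
    if op != "add":
        raise ValueError(f"unknown command: {command}")
    nums = [int(a) for a in args]
    program = [("LOAD", nums[0])]
    program += [("ADD", n) for n in nums[1:]]
    program.append(("PRINT", None))
    return program

def run(program):
    acc = 0
    for opcode, operand in program:
        if opcode == "LOAD":
            acc = operand
        elif opcode == "ADD":
            acc += operand
        elif opcode == "PRINT":
            print(acc)

run(translate("add 5 7 3"))   # prints 15
```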
This is the purpose of an interpreter. The commands sent to an interpreter are analyzed for their "correctness" in Language B. This is what we call "semantics": WTF did you mean??? Humans do this as we communicate with each other, and computers need similar abilities to distinguish correct phrases or "sentences" in a language from gibberish. Thus commands can be sent to a computer, and the "interpreter" will "decipher" those commands and create a set of lower-level commands to execute.
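A small sketch of that "valid sentence or gibberish?" step, using the same toy command language as above. This only checks the form of the command; full semantic analysis goes further, but it shows the idea of rejecting gibberish before anything is executed:

```python
import re

# Accept only commands that fit the toy grammar: "add" followed by
# one or more integers. Anything else is rejected before execution.
COMMAND_PATTERN = re.compile(r"^add(\s+-?\d+)+$")

def is_well_formed(command):
    return COMMAND_PATTERN.match(command.strip()) is not None

for cmd in ["add 5 7 3", "add five seven", "blargh 1 2"]:
    if is_well_formed(cmd):
        print(f"OK, interpreting: {cmd}")
    else:
        print(f"Rejected as gibberish: {cmd}")
```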
As languages change, the interpreters need updating in order to understand those changes. This gives the interpreter new abilities. As features are requested, higher-level automata are created to demonstrate or illustrate the changes. This is what we in IT call a "Proof of Concept": in order to functionally "prove" that technical concepts are in fact pragmatic, processes are created to prove those concepts at a functional level. I will get back to the rift between computer science and business later.
Anyway, once a bridge is created between the science of the interpreter and the real-time functionality of the interpreter, code or programs can be written such that a computer can in fact decipher and execute commands sent by a human to the machine. Obviously testing and quality assurance will be needed to ensure that the commands are in fact being "interpreted" as they should be, but this is part of the "proof of concept".
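As a tiny example of that testing step, one might write checks like these against the toy command language above (hypothetical, and re-implemented here so the snippet stands alone; the point is simply verifying that commands are interpreted as intended):

```python
# Minimal "quality assurance" for the toy interpreter: feed in known
# commands and assert the results match what we expect.

def evaluate(command):
    op, *args = command.split()
    assert op == "add", f"unknown command: {command}"
    return sum(int(a) for a in args)

def test_interpreter():
    assert evaluate("add 1 2") == 3
    assert evaluate("add 5 7 3") == 15
    assert evaluate("add -2 2") == 0
    print("all interpreter checks passed")

test_interpreter()
```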
Anyway, the main issue between computer science and IT is that concepts do not necessarily have a functional path from the drawing board to the execution phase. This is why scientists are needed: code writers engineer code, they do not create concepts. This is the "rift" between the two areas.