This study guide was created from lecture videos and is meant to help you gain an understanding of how AI works.
In 1956, the General Problem Solver became the first computer program designed to solve any problem that could be expressed as a mathematical formula. It was built on what was called the physical symbol system hypothesis, the theory that manipulating symbols is the key to general intelligence. The argument against this was the Chinese Room Argument, which holds that such systems are simply matching patterns with preprogrammed responses, which is not a sign of intelligence. Combinatorial explosion occurs when the number of matching statements grows larger and larger, until the matching becomes overwhelming. This approach was the foundation of AI for about 25 years.
In the 1950s, single computers took up entire floors and had less power than today's smartphones. The term artificial intelligence was coined to inspire new research. Artificial intelligence is any system that exhibits behavior that could be interpreted as human intelligence. The challenge is defining human intelligence: there is no single standard, and computer intelligence and human intelligence work in different ways. Computers excel at identifying and matching patterns; for anything with set rules and a finite set of possibilities, a computer can be programmed to find patterns. Because we are human, we do not think about tasks the same way a computer would.
Strong AI is a machine with the capabilities you would expect from an artificial person, otherwise known as general artificial intelligence. Currently, it is still just a subject of science fiction.
Weak AI is confined to a very narrow task and is otherwise known as narrow AI. Its objective is to help you with one specific, narrow task. An example of this is Apple's Siri.
In expert systems, human experts encode the different steps needed to reach a conclusion; when the list of steps is long enough, the system starts to feel like artificial intelligence. The program works through the long list, matching symbols and patterns to reach a conclusion.
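The rule-matching described above can be sketched as a short program. This is a minimal illustration, not a real medical system; the rule conditions, facts, and conclusions are invented for the example.

```python
# A minimal expert-system sketch: the "expert knowledge" is a list of
# (condition, conclusion) pairs, and the program simply scans the list
# until a condition matches the known facts.

RULES = [
    (lambda f: f["fever"] and f["cough"], "possible flu"),
    (lambda f: f["fever"] and not f["cough"], "possible infection"),
    (lambda f: not f["fever"], "likely not flu"),
]

def diagnose(facts):
    """Walk the rule list and return the first conclusion whose condition matches."""
    for condition, conclusion in RULES:
        if condition(facts):
            return conclusion
    return "no conclusion"
```

The longer the rule list, the more "intelligent" the program appears, even though it is only pattern matching, exactly as the Chinese Room Argument points out.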
The symbolic systems approach allowed machines to act as if they were intelligent when in actuality they were simply following patterns. It allowed programmers to create expert systems for tasks normally reserved for experts, such as diagnosing illnesses or assigning a credit score. The problem was that these systems were long lists of matching patterns, and experts were needed to create them. Expert systems faded away in the 1980s, although the approach is still relevant to planning in artificial intelligence.
Heuristic reasoning gives AI a kind of common sense by limiting which patterns the program has to match at any one time, which is known as limiting the search space. You would use heuristic reasoning when building planning AI systems. Make sure your heuristics are accurate: if a heuristic is wrong, everything built on that assumption will also be wrong.
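Limiting the search space can be illustrated with a greedy best-first search: instead of exploring every path (which invites combinatorial explosion), a heuristic estimate decides which option to try next. The graph and the heuristic values here are made-up example data.

```python
import heapq

# Tiny example graph and a heuristic H estimating distance to goal "D".
# Both are invented for illustration; in a real planner the heuristic
# would come from domain knowledge.
GRAPH = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
H = {"A": 2, "B": 1, "C": 1, "D": 0}

def greedy_search(start, goal):
    frontier = [(H[start], start, [start])]   # priority queue ordered by heuristic
    visited = set()
    while frontier:
        _, node, path = heapq.heappop(frontier)  # expand most promising node first
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        for nxt in GRAPH[node]:
            heapq.heappush(frontier, (H[nxt], nxt, path + [nxt]))
    return None
```

Note the warning from the notes in action: if `H` were badly wrong, the search would confidently expand the wrong nodes first.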
The symbolic systems approach and planning AI work well for directions, contracts, logistics, and video games.
If there are too many lists and variables, then planning everything out and matching patterns is not a good idea. Allowing the computer to learn the patterns itself is called machine learning. In 1959, Arthur Samuel created a program that could play checkers and was designed to play against itself to learn and gain new strategies. You did not need a symbolic system, as the machine itself created the lists and the matching patterns. The same challenge remains, though: the machine does not know what is being said, it is just matching patterns. The difference between the symbolic systems approach and machine learning is that machine learning learns the patterns by looking at the data, and the machine can then continue to learn.
Artificial neural network: a computer program that mimics the structure of the human brain. The neurons in an artificial neural network are organized in layers, starting with the input layer and ending with the final output layer; the layers in between are known as hidden layers. The hidden layers learn new patterns by passing forward the information each layer is confident in. The idea is that if the data passes through enough times, you get a complete picture of the pattern. The challenge is that this is time consuming. An artificial neural network can train itself to understand its input and recognize it when looking through massive amounts of data.
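The layer structure can be sketched as a toy forward pass: inputs flow into a hidden layer, and the hidden layer's outputs flow into the output layer. The weights below are arbitrary illustrative numbers, not a trained model.

```python
import math

def sigmoid(x):
    # Squashes any number into the range (0, 1), a common neuron activation.
    return 1 / (1 + math.exp(-x))

def layer(inputs, weights):
    """Each neuron sums its weighted inputs and squashes the result."""
    return [sigmoid(sum(w * i for w, i in zip(row, inputs))) for row in weights]

hidden_weights = [[0.5, -0.6], [0.3, 0.8]]   # 2 hidden neurons, 2 inputs each
output_weights = [[1.0, -1.0]]               # 1 output neuron, 2 hidden inputs

def forward(inputs):
    hidden = layer(inputs, hidden_weights)   # input layer -> hidden layer
    return layer(hidden, output_weights)[0]  # hidden layer -> output layer
```

Deep learning is this same picture with many hidden layers stacked between input and output.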
In the late 1980s, researchers started thinking about machine learning again. Back in 1958, Frank Rosenblatt had created an early artificial neural network. Instead of using the terms nodes and neurons, he used the term perceptrons. Rosenblatt believed that if you tied these perceptrons together, you could create machine intelligence, and that perceptrons were the most promising path to artificial intelligence. He created the Mark 1 Perceptron, in which thousands of perceptrons were tied together to create an early neural network.
In 1969, Marvin Minsky argued against Rosenblatt's approach in a book titled "Perceptrons", and Rosenblatt died before he had the chance to defend his theory. The problem with perceptrons was the lack of a hidden layer, which is a key factor in how artificial neural networks learn and adapt.
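A single perceptron in the spirit of Rosenblatt's design can be sketched in a few lines: weighted inputs, a threshold, and a simple update rule. It is trained here on the logical AND pattern, something a lone perceptron can learn; a famous part of Minsky's critique was that without hidden layers it cannot learn patterns like XOR.

```python
def predict(weights, bias, x):
    # Fire (output 1) if the weighted sum of inputs crosses the threshold.
    return 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0

def train(samples, epochs=20, lr=0.1):
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in samples:
            error = target - predict(weights, bias, x)
            # Nudge each weight toward the correct answer.
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

# Labeled examples of logical AND: output is 1 only when both inputs are 1.
AND = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train(AND)
```

Hinton's later addition of hidden layers (next paragraph) is what removed this limitation.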
In the 1980s, Geoff Hinton created a version of Rosenblatt's perceptron that included hidden layers. In the 1990s, Hinton worked in a new field called deep learning, which uses many hidden layers, creating a large gap between the input and output layers and giving the neural network more ability to learn. One technique, backpropagation, made sure nodes spread their knowledge quickly and strengthened the neural connections. Clustering was used in deep learning to group the neurons so they could better identify patterns and create categories; clustering can find categories of patterns that help the neural network identify things such as an image.
Machine learning feeds on data as a way to learn new things; the more data, the easier it is for the network to find patterns. In the 1990s, Google used machines to find categories and connections. When people clicked on a link, it created a stronger connection, meaning the link was given a stronger weight. This is the same way that connections between neurons strengthen in an artificial neural network. Deep learning looks at so much data and creates so many new connections that we are often unsure how the program comes up with its patterns.
Symbolic reasoning = abstract problems where you know the steps, vs. machine learning = looking for patterns in data.
For deep learning, you want to feed as much data as you can through the network. The deep learning neural network processes the data and then creates a model it can use to make accurate guesses.
Symbolic reasoning works well for most problems and uses strict definitions. Machine learning zeroes in on patterns, is flexible, and does not rely on humans once trained. Symbolic reasoning can be used when you do not have access to large amounts of data. Machine learning relies on lots of data that needs tweaking; symbolic reasoning requires a long setup but does not use outside data.
Supervised learning: you give the neural network a small set of labeled data, known as a training set, whose items share the same attributes. Once the network is trained, you can feed new data into it to be categorized. Studying supervised learning before unsupervised learning is a popular way of learning how AI works.
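Supervised learning can be sketched with one of the simplest possible classifiers, a nearest-centroid model: a labeled training set teaches the model where each category sits, and new points are assigned to the closest category. The points and labels are invented example data; this is a stand-in for a real neural network, not one itself.

```python
def centroid(points):
    # Average each coordinate across the points in one category.
    return [sum(c) / len(points) for c in zip(*points)]

def fit(training_set):
    """training_set: list of (point, label) pairs, i.e. pre-labeled data."""
    by_label = {}
    for point, label in training_set:
        by_label.setdefault(label, []).append(point)
    return {label: centroid(pts) for label, pts in by_label.items()}

def classify(model, point):
    # Assign the new point to the label whose centroid is closest.
    return min(model, key=lambda lbl: sum((a - b) ** 2
                                          for a, b in zip(point, model[lbl])))

# A tiny labeled training set (made-up 2-D measurements).
training = [([1, 1], "cat"), ([1, 2], "cat"), ([8, 8], "dog"), ([9, 8], "dog")]
model = fit(training)
```

The key supervised ingredient is the labels: a human decided in advance which points are "cat" and which are "dog".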
Unsupervised learning: you feed all the data through the neural network and let the network create the categories for you. The categories it creates are arbitrary, and while they can be quite accurate, they may be completely different from how a human would categorize.
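A classic unsupervised sketch is k-means clustering: the algorithm receives unlabeled points and invents its own groupings. The points below are invented example data, and notice that the resulting clusters have no human-given names, just as the notes describe.

```python
import random

def kmeans(points, k, iters=10, seed=0):
    random.seed(seed)
    centers = random.sample(points, k)   # start from k arbitrary points
    for _ in range(iters):
        # Assign every point to its nearest center.
        clusters = [[] for _ in range(k)]
        for p in points:
            idx = min(range(k), key=lambda i: sum((a - b) ** 2
                                                  for a, b in zip(p, centers[i])))
            clusters[idx].append(p)
        # Move each center to the middle of its cluster (keep it if empty).
        centers = [[sum(c) / len(cl) for c in zip(*cl)] if cl else centers[i]
                   for i, cl in enumerate(clusters)]
    return clusters

# Unlabeled data: two visually obvious groups the algorithm must discover.
points = [[1, 1], [1, 2], [2, 1], [8, 8], [9, 8], [8, 9]]
clusters = kmeans(points, k=2)
```

No labels were supplied; the machine separated the data on its own, into categories it numbered arbitrarily.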
Backpropagation is a term used around artificial neural networks and is a common way to deal with error correction. In supervised learning, there needs to be a way to know when the artificial neural network has made a mistake. Backpropagation is used when the network classifies data incorrectly: it makes slight adjustments to improve recognition, training the network through the metaphor of a gradient adjustment. It is only used for supervised learning, because a human being has to identify the errors and help the network correct itself. This is a good distinction to keep in mind when learning how AI works.
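The "slight adjustments down a gradient" idea can be shown in miniature with a single weight. A labeled target tells the model how wrong its guess was, and each update nudges the weight a small step in the direction that reduces the error. This is the gradient intuition behind backpropagation, not a full multi-layer implementation; the data is invented so that the true relationship is target = 2 × input.

```python
def train_weight(inputs, targets, lr=0.1, epochs=100):
    w = 0.0
    for _ in range(epochs):
        for x, t in zip(inputs, targets):
            pred = w * x        # forward pass: the model's guess
            error = pred - t    # supervised signal: how wrong was it?
            w -= lr * error * x # slight adjustment down the error gradient
    return w

# Invented supervised data: each target is exactly twice its input.
w = train_weight([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])
```

After many small corrections the weight settles near 2, the value that makes the guesses match the labels.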
Regression analysis looks at the relationship between a dependent variable and one or more independent variables. In regression analysis you use data to identify relationships, while in artificial neural networks you use data to classify. It is classification (sorting) versus regression (connecting). Classification is when the data is used to categorize; if you are looking for patterns that connect variables, you would use regression analysis. Your artificial neural network will only show you the patterns; humans are needed to find the meaning behind the connections.
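The "connecting" side of the comparison can be made concrete with simple least-squares linear regression, which fits a line exposing the relationship between an independent and a dependent variable. The data points (hours studied vs. exam score) are made-up numbers for illustration.

```python
def linear_regression(xs, ys):
    # Ordinary least squares for one independent variable:
    # slope = covariance(x, y) / variance(x), intercept from the means.
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Hours studied (independent) vs. exam score (dependent), invented data.
slope, intercept = linear_regression([1, 2, 3, 4], [52, 54, 56, 58])
```

The fitted slope and intercept state the relationship directly (here, two extra points per hour studied), whereas a classifier would only sort students into categories; in both cases a human still interprets what the relationship means.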
Robotics is simply having machines work on physical tasks. Combining robotics and AI allows a robot to adapt and learn new tasks. Autonomous vehicles use supervised machine learning on an artificial neural network: a car is outfitted with sensors, the sensor data is fed into the neural network, and the machine looks for patterns. Backpropagation allows cars to learn from their mistakes.
We can't communicate with machines the same way they communicate with each other, so machines need to get better at communicating with humans. Natural language processing (NLP) is when you can interact with a machine in your own language: the machine can understand and interpret human language, giving machines a better understanding of the human world. Find out more by going to SAS.
The Internet of Things (IoT) connects everyday things, allowing smart devices to communicate with each other. This creates even more data; it will be easier to collect data than for humans to analyze it. For more information regarding the Internet of Things, visit ZDNet.
Big data is essentially collecting more data than we can handle: it is easier to create data than to store and interpret it. Big data is a driver for machine learning and is about managing and analyzing massive data sets. Data mining is when you look through data to find insights. The difference between data mining and machine learning is the technology used: machine learning requires training and frameworks, while data mining uses broader tools and does not require training. Data mining is about digging through your data; machine learning is training your machine to dig through the data for you. You need terabytes of data to have a big data project. Instead of directly mining the big database, you will train a machine or neural network to find patterns. The challenge is that you need to create training sets for machine learning instead of interacting directly with the big data. Learn more about the big data revolution by visiting IBM.
Data science combines knowledge of programming, data, math, statistics, and hacking, applied using the scientific method. Data science differs from AI in that AI does most of the pattern matching, while data scientists know more about the data behind their insights. A neural network will find unexpected patterns, but it cannot explain why those patterns exist; AI finds the pattern better, but it will not answer why. Data science then explores why it works.
Does your problem require abstract reasoning or detailed pattern matching? Don't assume AI will provide the best approach for your need.