Basic AI terms
For example, AI, ML, deep learning, neural networks, computer vision, natural language processing (NLP), model, algorithm, training and inferencing, bias, fairness, fit, large language model (LLM)
AI (Artificial Intelligence)
Artificial intelligence, AI, is the field of computer science dedicated to solving cognitive problems commonly associated with human intelligence, such as learning, creation, and image recognition. The goal of AI is to create self-learning systems that derive meaning from data. AI can also perform repetitive and monotonous tasks, which increases business efficiency by freeing up employees to do more creative work. AI is exceptionally powerful at finding patterns in data and forecasting trends, which helps businesses make smarter decisions and react more quickly to problems.
ML (Machine Learning)
Machine learning is a branch of AI and computer science that focuses on using data and algorithms to imitate the way humans learn, with systems that gradually improve their accuracy as they see more data. Machine learning models are trained on large datasets to identify patterns and make predictions. An example is a product recommendation for a customer who's shopping online.
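To make "learning from data" concrete, here is a minimal sketch in Python: it fits a straight line to a handful of made-up example points using the least-squares formula, then uses the learned pattern to predict an unseen value. The data and the browsing scenario are invented for illustration.

```python
# A minimal sketch of "learning from data": fit a line y = w * x
# to example points by minimizing squared error (least squares).
# The data points below are made up for illustration.

data = [(1, 2.1), (2, 3.9), (3, 6.2), (4, 8.0)]  # (hours browsed, items viewed)

# Closed-form least-squares slope for a line through the origin:
# w = sum(x * y) / sum(x * x)
w = sum(x * y for x, y in data) / sum(x * x for x, _ in data)

def predict(x):
    """Use the learned pattern to make a prediction for unseen input."""
    return w * x

print(round(predict(5), 1))  # prediction for 5 hours of browsing
```

The "training" here is a single formula, but the idea is the same as in large models: the parameter `w` is adjusted to match past examples, then reused on new inputs.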
Neural Networks
A neural network is a group of connected "nodes" or "neurons" that work together to process data. These networks can have many layers — we call them "deep" when they have a lot. Neural networks are used in tasks like image recognition and language translation.
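The nodes-and-layers idea can be sketched in a few lines of Python. Each node computes a weighted sum of its inputs and applies an activation function; stacking layers gives a (very small) network. The weights below are arbitrary illustration values, not trained.

```python
# A minimal sketch of a neural network "layer": each node computes a
# weighted sum of its inputs and applies an activation function.
# The weights are arbitrary illustration values, not trained.

def relu(x):
    """A common activation: pass positive values, zero out negatives."""
    return max(0.0, x)

def layer(inputs, weights):
    """One layer: every node weighs all inputs, then applies relu."""
    return [relu(sum(w * i for w, i in zip(node_weights, inputs)))
            for node_weights in weights]

inputs = [1.0, 2.0]
hidden = layer(inputs, [[0.5, -0.2], [0.3, 0.8]])  # 2 hidden nodes
output = layer(hidden, [[1.0, -1.0]])              # 1 output node
```

A "deep" network is just many of these layers chained together, with the weights learned during training rather than written by hand.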
Deep Learning
Deep learning is a type of machine learning that uses algorithmic structures called neural networks, which are based on the structure of the human brain. In our brains, brain cells called neurons form a complex network where they send electrical signals to each other to help us process information. In deep learning models, we use software modules called nodes to simulate the behavior of neurons.
Computer Vision
Computer vision is a field of AI that helps machines "see" and understand images or videos. It is used in facial recognition, security cameras, and helping self-driving cars understand road signs or people.
Natural Language Processing (NLP)
Natural language processing, NLP, is what allows machines to understand, interpret, and generate human language in a natural-sounding way. This is the technology that powers Alexa devices and those chatbots that let you book a hotel.
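Most NLP systems begin by breaking language into pieces the machine can count. Here is a minimal sketch of that first step, tokenizing a sentence and tallying word frequencies (a "bag of words"); the sentence is invented for illustration, and real systems go far beyond this.

```python
# A minimal sketch of one early NLP step: turn raw text into tokens
# and count word frequencies (a "bag of words").

from collections import Counter

def tokenize(text):
    """Lowercase the text, strip simple punctuation, split into words."""
    return text.lower().replace(".", "").replace(",", "").split()

sentence = "Book a hotel, then book a taxi."
tokens = tokenize(sentence)
counts = Counter(tokens)
print(counts["book"])  # the word appears twice
```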
Model
A model is the result of training a machine learning algorithm on data. It’s like a smart tool that can make predictions. For example, a model can predict if an email is spam or not based on past emails.
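The spam example above can be sketched as a toy model: "training" scans past labeled emails to collect words common in spam, and the resulting function is the model that scores new emails. The emails and the two-word-overlap rule are invented for illustration; real spam filters are far more sophisticated.

```python
# A minimal sketch of a "model" as the output of training: learn which
# words appeared in past spam, then use that to judge new emails.
# The example emails are made up for illustration.

past_emails = [
    ("win a free prize now", True),      # spam
    ("meeting moved to friday", False),
    ("free money click now", True),      # spam
    ("lunch on friday?", False),
]

# "Training": collect the words seen in spam emails.
spam_words = set()
for text, is_spam in past_emails:
    if is_spam:
        spam_words.update(text.split())

def model(email):
    """Predict spam if the email shares several words with past spam."""
    overlap = len(spam_words & set(email.split()))
    return overlap >= 2

print(model("claim your free prize"))   # True
print(model("see you at the meeting"))  # False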
Algorithm
An algorithm is a set of rules or steps the computer follows to solve a problem. In AI, algorithms are used to process data and learn patterns. Different algorithms are used for different types of tasks, like sorting, recognizing images, or recommending movies.
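A short example makes "a set of rules or steps" concrete. This algorithm finds the largest value in a list by checking each item in turn; the list is arbitrary illustration data.

```python
# A minimal sketch of an algorithm: a fixed set of steps the computer
# follows. This one finds the largest value in a list.

def find_max(values):
    best = values[0]
    for v in values[1:]:
        if v > best:   # step: compare the next item with the best so far
            best = v   # step: keep the larger one
    return best

print(find_max([3, 7, 2, 9, 4]))  # 9
```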
Training and Inferencing
- Training: This is when the AI learns from data. The computer looks at many examples and adjusts itself to improve.
- Inferencing: This is when the trained model is used to make predictions or decisions. For example, after training a model to recognize cats, it can now look at a new photo and say, "Yes, that’s a cat!"
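The two phases above can be sketched in a few lines. Training learns a decision boundary from labeled examples; inferencing applies that boundary to new data. The "measurements" and the midpoint rule are invented for illustration.

```python
# A minimal sketch of training vs. inferencing.
# The measurements below are invented for illustration.

cats   = [4.0, 4.5, 5.1]   # some measurement for known cats
others = [1.0, 1.5, 2.2]   # the same measurement for non-cats

# Training: learn a boundary -- the midpoint between the class averages.
threshold = (sum(cats) / len(cats) + sum(others) / len(others)) / 2

def infer(measurement):
    """Inferencing: the trained threshold judges a new, unseen example."""
    return measurement > threshold

print(infer(4.8))  # True  -> "Yes, that's a cat!"
print(infer(1.2))  # False
```

Notice that the expensive part (looking at many examples) happens once, during training; inferencing just reuses the learned `threshold`.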
Bias
Bias happens when a machine learning model treats some groups unfairly because the training data isn’t diverse enough. For example, if there are no approved loans from 25-year-old women in Wisconsin in the training data, the model might wrongly learn that people like them shouldn’t get loans—even if they’re qualified. This can happen when certain features, like gender, influence the model too much. To make models fair, we should check the data for bias from the start, avoid using unfair features, and keep testing the model to make sure it's treating everyone equally.
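One simple way to "keep testing the model" is to compare its decision rates across groups. Here is a minimal sketch with invented records: a large gap between groups doesn't prove bias on its own, but it flags the model for closer review.

```python
# A minimal sketch of one bias check: compare a model's approval rate
# across groups. The decision records below are invented.

decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", False), ("group_b", False), ("group_b", True), ("group_b", False),
]

def approval_rate(group):
    outcomes = [approved for g, approved in decisions if g == group]
    return sum(outcomes) / len(outcomes)

gap = approval_rate("group_a") - approval_rate("group_b")
print(round(gap, 2))  # a large gap is worth investigating
```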
Fairness
Fairness means making sure the AI works well for everyone and does not favor one group over another. It is about building AI that is ethical and inclusive.
Fit
Fit is about how well a model learns from the training data:
- Underfitting: Underfitting happens when a model is too simple to capture the underlying patterns in the data. It doesn't learn enough from the training data, so it performs poorly on both training and new data. This can happen if the model is not given enough features or if it's using a basic algorithm that can't represent the complexity of the data. For example, trying to fit a straight line to data that clearly forms a curve would result in underfitting.
- Overfitting: Overfitting occurs when a model learns the training data too well — including all the small details, noise, or errors. As a result, while it performs very well on the training data, it fails to make accurate predictions on new, unseen data because it hasn't learned to generalize. This often happens when the model is too complex or when it is trained for too long. Imagine a student who memorizes every practice test question but struggles on the real exam — that's overfitting.
- Good fit: Good fit means the model has found the right balance between underfitting and overfitting. It learns the main patterns in the training data without memorizing noise or irrelevant details, so it performs well on both training and new data. A well-fit model generalizes effectively, making it reliable for real-world use. This is the goal in machine learning — to create a model that understands the data well enough to make accurate predictions in different scenarios.
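Underfitting can be demonstrated in a few lines: on data whose true pattern is a line, a model that ignores the input (always predicting the average) has a much larger error than one that captures the pattern. The data points are invented for illustration.

```python
# A minimal sketch of fit: a too-simple model (always predict the
# average) underfits, while a model matching the data's real pattern
# fits well. The data is invented; its true pattern is y = 2x.

data = [(1, 2), (2, 4), (3, 6), (4, 8)]

avg = sum(y for _, y in data) / len(data)
underfit = lambda x: avg      # ignores the input entirely
good_fit = lambda x: 2 * x    # captures the real pattern

def error(model):
    """Total squared error of a model on the data."""
    return sum((model(x) - y) ** 2 for x, y in data)

print(error(underfit) > error(good_fit))  # True: underfitting costs accuracy
```

Overfitting is the mirror image: a model complex enough to drive its error to zero on this data by memorizing it would then do poorly on new points.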
Large Language Model (LLM)
A Large Language Model is a type of deep learning model trained on huge amounts of text data. It can understand and generate human-like language. Examples include ChatGPT and Google Bard. LLMs can write emails, answer questions, translate text, and more.