
Model Parameters: What Are Weights And Biases?

Writer: Pulkit Sahu

Updated: Feb 3

Learn how AI models use parameters like weights and biases to generate outputs.


Suppose a new 175-billion-parameter language model has just been launched. What exactly does "175 billion parameters" mean? Parameters are the heart of a machine learning model. To build intuition, think of a tap mixer: you adjust the hot and cold valves until the water reaches the temperature you want. In much the same way, a machine learning model adjusts its parameters until it produces accurate, context-aware outputs.



#1: Understanding Parameters: Weights and Biases


Imagine you're chatting with an AI model and give it a prompt like:

"Write an email addressing Mr. Ram regarding a concern..."

The model breaks this down into key components like "email," "write," "Mr. Ram," "concern," tone, style, and semantics. These elements are then translated into numbers the model can work with, known as features and denoted x.


The model then generates a response, such as:

"Hello Mr. Ram, I hope this email finds you well..."

This output is the target variable, often denoted as y. But how does the model transform x into y? It all comes down to the model’s parameters. To simplify, let’s assume a linear relationship between x and y like this:

y = Wx + b

Here, W denotes the weights and b the biases.
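The linear relationship above can be sketched in a few lines of Python. The values of W and b here are made up purely for illustration; a real model learns them from data.

```python
# Minimal sketch of y = Wx + b for a single feature.
# W and b are illustrative values, not learned parameters.
W = 0.8   # weight: how strongly the feature x influences the output
b = 2.0   # bias: the output the model falls back to when x = 0

def predict(x):
    return W * x + b

print(predict(5.0))  # 0.8 * 5.0 + 2.0 = 6.0
print(predict(0.0))  # with no input, the output is just the bias: 2.0
```

Notice that when x = 0, the output is exactly b, which previews the role of the bias discussed below.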

#2: Weights and Biases


Weights (W) are one type of parameter. As the equation shows, they multiply the features x. Weights determine how much importance each feature receives when predicting the output y.


Biases (b) act as a starting point, or offset, for the model's output. Even when there is no input (i.e., x = 0), the bias allows the model to produce a non-zero output.


Mathematically:

y = Wx + b
If x = 0:
y = W⋅0 + b
y = b
This demonstrates that the bias b sets an initial value for the output, even without any input.


What do weights in machine learning represent?

  • Importance of input features

  • Bias term

  • Importance of outputs

  • All of the above




#3: How Parameters Are Learned


These parameters (weights and biases) are learned through extensive training of the machine learning model. They are the result of the model’s learning process, refined through the analysis of vast datasets.
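As a concrete sketch of this learning process, the loop below fits W and b by gradient descent on a tiny synthetic dataset. The data, starting values, learning rate, and iteration count are all illustrative assumptions; real models apply the same idea at vastly larger scale.

```python
# Hypothetical training loop: learn W and b for y = Wx + b by gradient
# descent on a tiny synthetic dataset (true relationship: y = 3x + 1).
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 4.0, 7.0, 10.0]

W, b = 0.0, 0.0   # start from arbitrary parameters
lr = 0.05         # learning rate (an assumed hyperparameter)

for _ in range(2000):
    # Gradients of the mean squared error with respect to W and b
    grad_W = sum(2 * (W * x + b - y) * x for x, y in zip(xs, ys)) / len(xs)
    grad_b = sum(2 * (W * x + b - y) for x, y in zip(xs, ys)) / len(xs)
    W -= lr * grad_W
    b -= lr * grad_b

print(round(W, 2), round(b, 2))  # approaches the true values 3.0 and 1.0
```

Each iteration nudges the parameters in the direction that reduces the prediction error, which is the essence of "refined through the analysis of vast datasets."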


We started with a simple linear example, but real ML models involve networks of both linear and non-linear equations. These are computed thousands, or even millions, of times over a vast number of features. This is why models can end up with billions of parameters, especially when trained for complex tasks beyond simple email writing, such as generating images, composing poems, or analysing complex data.
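To see where large parameter counts come from, note that with many features W becomes a vector (one weight per feature) and the forward pass becomes a dot product. The three-feature example below uses made-up values; stacking millions of such operations across many layers is what pushes modern models into the billions of parameters.

```python
# Sketch: one weight per feature, plus a single bias.
# A model with n features has n + 1 parameters in this simple linear form.
W = [0.2, -0.5, 1.3]   # illustrative weights, one per feature
b = 0.1                # illustrative bias

def predict(x):
    # Forward pass: dot product of weights and features, plus the bias
    return sum(w * xi for w, xi in zip(W, x)) + b

print(round(predict([1.0, 2.0, 0.5]), 2))  # -0.05
```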


Once these parameters are learned, they can be applied to predict outcomes for new, unseen data. For example, if x represents the pixels of an image of a black cat, the model will use its learned parameters to predict y, the output label: "black cat."



In machine learning, weights and biases are learned through training to create a model that can make predictions on new, unseen data.

  • True

  • False






© 2024-25 by VenusMoon Education | Udyam Registration Number: UDYAM-MP-10-0030480
