Introduction to Artificial Neural Networks Weights and Bias

An artificial neural network is a machine learning system made up of numerous interconnected neurons arranged in layers to simulate the structure and function of a biological brain. Learning occurs in a biological brain, in part, by the strengthening of connections between neurons. When you are learning a new subject or a new skill, the neurons in your brain form new connections with other neurons. The more you study or practice, the stronger these connections become.

In an artificial neural network, learning occurs in a similar fashion:

  1. Each neuron (also referred to as a "node") receives one or more inputs from an external source or from other neurons.

  2. Each input is multiplied by a weight to indicate the input's relative importance.

  3. The sum of the weighted input(s) is fed into the neuron.

  4. Bias is added to the sum of the weighted inputs.

  5. An activation function within the neuron performs a calculation on the total.

  6. The result is the neuron's output, which is passed to other neurons or delivered to the external world as the machine's output.

  7. The output passes through a loss function or cost function that evaluates the accuracy of the neural network's prediction, and results are fed back through the network, indicating adjustments that must be made to the weights and biases.

  8. The process is repeated in multiple iterations to optimize the accuracy of the output; weights and biases are adjusted with each iteration.
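The first four steps can be sketched in a few lines of Python (the input, weight, and bias values below are invented purely for illustration):

```python
inputs = [0.5, 0.8]    # step 1: inputs from an external source or other neurons
weights = [0.4, -0.2]  # step 2: one weight per input

# step 3: the weighted inputs are summed and fed into the neuron
weighted_sum = sum(x * w for x, w in zip(inputs, weights))

# step 4: bias is added to the sum of the weighted inputs
bias = 0.1
total = weighted_sum + bias

print(round(total, 2))  # 0.14
```

The total would then be passed to the neuron's activation function (step 5), discussed next.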

Activation Functions - Weights and Bias - Regression

Each node in an artificial neural network has an activation function that performs a mathematical operation on the sum of its weighted inputs and bias and produces an output. (A function is a special relationship in which each input has a single output.)

Different functions produce different outputs, and when you graph the outputs, you get different shapes. The most basic function used in machine learning is the Heaviside step function. This function outputs 1 (one) if its weighted input plus bias is positive or zero, and it outputs 0 (zero) if its weighted input plus bias is negative. In other words, the neuron either fires or doesn't. When you graph this function, you get something that looks like a step in a stairway, as shown below. The output can be one or zero, nothing in between.
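As a sketch, the Heaviside step function can be written directly from its definition:

```python
def heaviside(total):
    """Return 1 if the weighted input plus bias is zero or positive, else 0."""
    return 1 if total >= 0 else 0

print(heaviside(0.14))  # 1 -- the neuron fires
print(heaviside(-0.3))  # 0 -- the neuron doesn't fire
```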

The sigmoid function provides far more variation in values than a binary function like the Heaviside step function. When graphed, the output of a sigmoid function forms an "S" shape, as shown below. Sigmoid functions are commonly used in neural networks because they allow for making small adjustments within a limited range (0.0 to 1.0 on the vertical axis) during the machine learning process.

Note that regression functions, which are also used in machine learning, are not as useful in an artificial neural network, because their output range is infinite. The narrow range of a sigmoid function (from 0.0 to 1.0 on the vertical axis) makes the model more efficient and stable.
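A minimal sketch of the sigmoid function, using only the standard library:

```python
import math

def sigmoid(z):
    """Squash any real-valued input into the open range (0.0, 1.0)."""
    return 1.0 / (1.0 + math.exp(-z))

# Unlike the step function, the output varies smoothly between 0 and 1:
print(round(sigmoid(-4), 3))  # 0.018
print(round(sigmoid(0), 3))   # 0.5
print(round(sigmoid(4), 3))   # 0.982
```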

Weights

Weights enable the artificial neural network to dial up or dial down connections between neurons. For example, suppose you create an artificial neural network to distinguish among different dog breeds. Each neuron in one layer of the neural network may focus on a different characteristic — snout, ears, eyes, tail, size, shape, color, and so on. With weighted inputs, the network can increase or decrease the strength of the connection between each neuron in this layer and the neurons in the next layer to place less emphasis on the tail, for example, and more on the size and shape. 

Bias

While weights enable an artificial neural network to adjust the strength of connections between neurons, bias can be used to make adjustments within neurons. Bias can be positive or negative, increasing or decreasing a neuron’s output. The neuron gathers and sums its inputs, adds bias (positive or negative) and then passes the total to the activation function.

In terms of the sigmoid function graph presented earlier in this chapter, weight impacts the steepness of the curve, while bias shifts the curve left or right without changing the shape of the curve.
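This effect is easy to verify in code. In the sketch below (values invented for illustration), a bias of 0 puts the sigmoid's midpoint of 0.5 at x = 0, while a bias of -2 with a weight of 2 shifts that midpoint to x = 1 without changing the curve's shape:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def neuron(x, weight, bias):
    return sigmoid(weight * x + bias)

print(neuron(0.0, 2.0, 0.0))   # 0.5 -- midpoint at x = 0
print(neuron(1.0, 2.0, -2.0))  # 0.5 -- same curve, shifted right to x = 1
```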

Cost Function

A cost function or loss function indicates the accuracy of the model. Its output tells the neural network whether weights and biases need to be adjusted to improve the model's accuracy. Think of the cost function as the means to reward the machine for success and/or punish it for failure. It enables the machine to learn from its achievements or mistakes.
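Mean squared error is one common cost function; here is a minimal sketch (the prediction and target values are invented):

```python
def mse(predictions, targets):
    """Mean squared error: the average squared difference from the targets."""
    return sum((p - t) ** 2 for p, t in zip(predictions, targets)) / len(targets)

# A nearly correct prediction yields a small cost...
print(round(mse([0.9, 0.1], [1.0, 0.0]), 4))  # 0.01
# ...while an uncertain one yields a larger cost, signalling bigger adjustments.
print(round(mse([0.5, 0.5], [1.0, 0.0]), 4))  # 0.25
```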

To understand the interaction of functions, weights, and biases, imagine a neural network as a sound system with various dials for adjusting different parameters — volume, tone, balance, and so on. During the machine learning process, a neural network may turn dozens, hundreds, or even thousands of dials, making tiny adjustments to weights and biases, and then checking the end result. It repeats this process over and over to optimize the output's accuracy with every iteration.

Q: What are neural network weights and how do they influence predictions?

In an artificial neural network, weights measure the strength of the connections between neurons; they indicate how much each input matters. Think of neurons as tiny decision-makers in the network. The weights determine how strongly each input can activate a neuron and contribute to a prediction.

During training, the network adjusts these weights based on the training data, working to reduce its errors. Weights play a central role in helping the network make good predictions.

Q: How does bias contribute to the function of a neural network?

Bias in a neural network acts as an extra parameter: it is added to the weighted sum of the inputs, shifting the neuron's output up or down.

Bias units are key because they help the network represent patterns that the weights alone cannot capture.

Here's why bias units are useful:

1. Better Patterns: They let the network fit a wider range of patterns.

2. Flexible Learning: They give the network an extra degree of freedom during training.

3. Improved Accuracy: They help the network produce more correct answers.

4. Extra Flexibility: They provide an offset the network can tune independently of its inputs.

Bias units make networks smarter and more accurate.

Without bias, a neural network struggles to fit the data: a neuron's output is forced to pass through a fixed point, so an input of all zeros always produces the same result no matter what the weights are. Bias lets the network shift its output to wherever the data requires.

Q: Can you explain the basic architecture of an artificial neural network?

The basic structure of an artificial neural network has three layers. There is an input layer, hidden layers, and an output layer.

- The input layer takes in the information.
- The hidden layers do the calculations.
- The output layer gives the final result.

Each layer is made up of units called neurons. Hidden layers may have different numbers of neurons; they transform the input into something the output layer can use. Neurons in adjacent layers are connected by weights, and during training these weights change to make better predictions.
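A minimal sketch of this layered structure, applying one fully connected layer twice (the weights and layer sizes are invented for illustration):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def layer(inputs, weights, biases):
    """One fully connected layer: each neuron has a row of weights and a bias."""
    return [sigmoid(sum(x * w for x, w in zip(inputs, row)) + b)
            for row, b in zip(weights, biases)]

# Input layer of 2 values -> hidden layer of 3 neurons -> output layer of 1.
x = [0.5, -1.0]
hidden = layer(x, [[0.1, 0.4], [-0.3, 0.2], [0.7, -0.5]], [0.0, 0.1, -0.1])
output = layer(hidden, [[0.6, -0.2, 0.3]], [0.05])
print(len(hidden), len(output))  # 3 1
```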

Q: Why do we need activation functions in a neural network?

Activation functions in neural networks add non-linearity. This helps the network learn complex patterns. Without them, the network would just do simple tasks like linear regression. It couldn't solve hard problems like image recognition or understanding natural language.
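A toy illustration of why non-linearity matters: stacking two purely linear layers always collapses into a single linear function, so without a non-linear activation in between, extra layers add no expressive power.

```python
def linear(x, w, b):
    return w * x + b

# Two stacked linear layers: w2*(w1*x + b1) + b2 ...
y_stacked = linear(linear(2.0, 3.0, 1.0), 0.5, -1.0)

# ... equal one linear layer with w = w2*w1 and b = w2*b1 + b2.
y_single = linear(2.0, 0.5 * 3.0, 0.5 * 1.0 - 1.0)

print(y_stacked == y_single)  # True
```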

Q: What is the process of training a neural network with a dataset?

Training a neural network means you adjust its settings based on mistakes it makes. You start by putting data through the network to get predictions. Then, you measure how wrong these predictions are using a loss function, which is a method to find errors.

You then send this error backward through the network using a method called backpropagation. This helps find out how much each part of the network contributed to the error. You use optimization methods like Stochastic Gradient Descent to update the settings in a way that reduces the error. You repeat this many times until the network works well.
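The loop described above can be sketched for the simplest possible "network", a single weight and bias trained by full-batch gradient descent on mean squared error (the data and learning rate are invented for illustration):

```python
data = [(1.0, 3.0), (2.0, 5.0), (3.0, 7.0)]  # targets follow y = 2x + 1
w, b, lr = 0.0, 0.0, 0.05

for _ in range(2000):
    # Accumulate the gradient of mean squared error over the dataset.
    grad_w = grad_b = 0.0
    for x, y in data:
        err = (w * x + b) - y            # forward pass, then measure the error
        grad_w += 2 * err * x / len(data)
        grad_b += 2 * err / len(data)
    w -= lr * grad_w                     # step against the gradient
    b -= lr * grad_b

print(round(w, 2), round(b, 2))  # 2.0 1.0
```

Each pass nudges the weight and bias in the direction that reduces the error, which is the same idea backpropagation applies across every layer of a full network.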

To enjoy learning about neural networks:

1. Understand Basic Terms: Learn what words like "weights," "biases," and "loss function" mean.

2. Visualize the Process: Draw or look at diagrams showing how data moves through the network.

3. Practice with Examples: Use simple examples to see how changes in settings affect predictions.

Some people might think this process is too complicated. But by breaking it down into simple steps, you can understand and enjoy how neural networks learn.

Q: What role do hidden layers play in a neural network?

Hidden layers help neural networks understand complex data. Each hidden layer learns something new from the data. The next layer builds on what the previous one learned.

This step-by-step learning makes neural networks very good at tasks like recognizing images and voices.

These tasks need the network to change simple patterns into meaningful ones. The number of hidden layers and neurons in each layer is important. They decide how much the network can do and how complex it can be.

- Hidden layers learn new features.
- Later layers build on what previous layers learned.
- More hidden layers and neurons mean more power.

Q: How are the dimensions of input and output layers determined in a neural network?

The number of neurons in the input and output layers of a neural network comes from the dataset and the task you want to do.

The input layer needs one neuron for each feature in the dataset. For example, if your data points have 20 features, you need 20 neurons in the input layer.

The output layer's size depends on what you want the network to predict. If you have a classification task with 10 possible classes, the output layer needs 10 neurons. Each neuron will represent the probability of one class. So, the data and the task will decide how many neurons you need in the input and output layers.
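A sketch of sizing the layers for that example (the hidden-layer size of 16 is a free design choice, not dictated by the data):

```python
import random

def init_layer(n_in, n_out):
    """One weight per (input, neuron) pair, plus one bias per neuron."""
    weights = [[random.uniform(-0.5, 0.5) for _ in range(n_in)]
               for _ in range(n_out)]
    biases = [0.0] * n_out
    return weights, biases

# 20 features -> 20 input neurons; 10 classes -> 10 output neurons.
w1, b1 = init_layer(20, 16)   # input (20) to hidden (16)
w2, b2 = init_layer(16, 10)   # hidden (16) to output (10)
print(len(w1), len(w1[0]))  # 16 20
print(len(w2), len(w2[0]))  # 10 16
```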

This is my weekly newsletter, which I call The Deep End because I want to go deeper than the results you'll see from searches or AI, incorporating insights from the history of data and data science methods. Each week I'll go deep to explain a topic that's relevant to people who work with technology, posting about artificial intelligence, data science, and data ethics.

This newsletter is 100% human written 💪 (* aside from a quick run through grammar and spell check).

