Fog computing is a term coined by Cisco for extending cloud computing to edge nodes and devices, which typically have limited memory and low computing capacity, especially in the case of IoT devices. Let's start with how the cloud fits in. When an IoT device is connected to the internet, it first connects to the cloud, and we then control it through a mobile phone, app, or website. When we want to interact with an IoT device, the phone, app, or website first contacts a cloud server; the cloud server communicates with the IoT device, the device sends its data back to the cloud, and finally the cloud relays that data to our phone, app, or website.
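The round trip described above can be sketched in a few lines of code. This is a toy model only: the class and method names (`CloudServer`, `IoTDevice`, `handle_app_request`) are illustrative, not a real API.

```python
# Toy sketch of the cloud-mediated path: app -> cloud -> device -> cloud -> app.
# All names here are hypothetical, chosen only to illustrate the flow.

class IoTDevice:
    def __init__(self, name):
        self.name = name

    def read(self):
        # Stand-in for reading a real sensor.
        return {"device": self.name, "temperature_c": 21.5}


class CloudServer:
    """Relays requests from apps to devices, and device data back to apps."""

    def __init__(self):
        self.devices = {}

    def register(self, device):
        self.devices[device.name] = device

    def handle_app_request(self, device_name):
        # Step 1: app contacts the cloud (this call).
        # Step 2: cloud contacts the device.
        device = self.devices[device_name]
        # Step 3: device sends its data back to the cloud.
        data = device.read()
        # Step 4: cloud relays the data to the app.
        return data


cloud = CloudServer()
cloud.register(IoTDevice("thermostat-1"))
print(cloud.handle_app_request("thermostat-1"))
```

Note that even this trivial request makes four hops, which is exactly the overhead fog computing tries to cut.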
We all know that IoT devices are primarily used to collect data from their environment: for example, as sensors, roadside cameras, or cameras monitoring parking spaces. When IoT devices are used as sensors, they produce a specified output in response to changes in an input phenomenon such as heat, smoke, or air pollution.
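A minimal sketch of that sensor behaviour, assuming a made-up smoke threshold (the value and function name are illustrative, not taken from any real sensor's datasheet):

```python
# Sketch: a sensor produces a specified output when an input phenomenon
# (here, smoke concentration) crosses a threshold. The threshold is a
# made-up value for illustration.

SMOKE_THRESHOLD_PPM = 300  # hypothetical alarm level, in parts per million

def smoke_sensor_output(smoke_ppm):
    """Return 'ALARM' when the smoke level exceeds the threshold, else 'OK'."""
    return "ALARM" if smoke_ppm > SMOKE_THRESHOLD_PPM else "OK"

print(smoke_sensor_output(120))  # OK
print(smoke_sensor_output(450))  # ALARM
```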
As you can see, the signal must travel a long way: from the mobile phone to the cloud server, from the cloud server to the IoT device, from the device back to the server, and finally from the cloud server to your phone. This introduces significant delay. Moreover, if a large number of IoT devices connect at the same time, the bandwidth becomes congested and the connection slows down.
On the other hand, we don't necessarily need a cloud server to analyse every kind of data or perform every calculation. We can achieve the same result with intermediate nodes or gateways that can perform some processing or computation on their own, or by using datasets previously produced by cloud processing. These intermediate nodes or gateways are called fog nodes. For example, you may be aware that finding a parking space in urban areas is becoming increasingly difficult. Here we can install IoT cameras over the parking slots. Each camera photographs the parking spaces at predetermined intervals and sends the raw images to a fog node, which analyses them, identifies the empty spaces, and sends only the result to the cloud, which then publishes the updated count of empty slots. In this scenario we expect the raw data to be processed, for instance by an AI model, in a nearby fog node.
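The parking example can be sketched as below. This is a simplification under a big assumption: the camera "frame" is just a list of booleans standing in for the result of image analysis, whereas a real fog node would run a vision model on actual photographs. The function names are hypothetical.

```python
# Sketch of the parking scenario: the fog node analyses raw camera data
# locally and uploads only a small summary to the cloud.
# A "frame" is a toy stand-in for an image: a list of booleans,
# True = slot occupied.

def analyse_frame(frame):
    """Fog-node step: return the indices of empty slots in one frame."""
    return [i for i, occupied in enumerate(frame) if not occupied]

def fog_node_update(frames):
    """Aggregate all cameras' frames into the small summary that is
    actually sent to the cloud (instead of the raw images)."""
    empty = [slot for frame in frames.values() for slot in analyse_frame(frame)]
    return {"empty_slot_count": len(empty)}

frames = {
    "cam-1": [True, False, True, True],  # one empty slot
    "cam-2": [False, False, True],       # two empty slots
}
print(fog_node_update(frames))  # {'empty_slot_count': 3}
```

The point of the design is the size of what crosses the network: the fog node consumes the bulky raw data locally and forwards only a tiny summary upstream.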
Now consider the case where there is no fog node: all the raw data must be sent to the cloud for processing or computation, which requires a great deal of bandwidth. Can you imagine the situation when billions of IoT devices are connected to the cloud at the same time?
On the other hand, as already discussed, every fog node is expected to have some computing capacity of its own. It can also process data against a previously saved dataset that is updated at short intervals. In this way, fog nodes reduce network overload, bandwidth congestion, and, most importantly, latency. For instance, suppose we connect our sensors (IoT devices) to the internet. When a sensor registers an input, it sends raw data to the nearest data centre (cloud server), which, if there are no fog nodes, might be 1,000 kilometres away. If a fog node is present, it processes the raw data locally instead. This lowers bandwidth congestion and latency, improving quality of service and allowing IoT devices to operate in real time.
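A back-of-the-envelope comparison makes the latency point concrete. The figures below are assumptions, not measurements: roughly 0.005 ms per kilometre one way (about the speed of light in optical fibre), a 1,000 km cloud data centre, and a fog node 1 km away. Real latency also depends on routing, congestion, and processing time, so treat this as a lower bound on the propagation delay alone.

```python
# Back-of-the-envelope propagation-delay comparison for the two paths.
# All figures are illustrative assumptions, not measurements.

MS_PER_KM_ONE_WAY = 0.005  # ~speed of light in optical fibre

def round_trip_ms(distance_km):
    """Round-trip propagation delay over the given distance."""
    return 2 * distance_km * MS_PER_KM_ONE_WAY

cloud_rtt = round_trip_ms(1000)  # distant cloud data centre
fog_rtt = round_trip_ms(1)       # nearby fog node

print(f"cloud round trip: {cloud_rtt:.2f} ms")  # 10.00 ms
print(f"fog round trip:   {fog_rtt:.2f} ms")    # 0.01 ms
```

Even ignoring queueing and processing, the nearby fog node is orders of magnitude closer in propagation delay, which is what makes near-real-time IoT responses feasible.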