
MODULE 4

1) Explain in detail the core functions of edge analytics, with a diagram.


Edge Analytics Core Functions
To perform analytics at the edge, data needs to be viewed as real-time flows. Whereas big data analytics is focused on large quantities of data at rest, edge analytics continually processes streaming flows of data in motion.
Streaming analytics at the edge can be broken down into three simple stages:

• Raw input data: This is the raw data coming from the sensors into the analytics processing unit.
• Analytics processing unit (APU): The APU filters and combines data streams, organizes them by time windows, and performs various analytical functions. It is at this point that the results may be acted on by microservices running in the APU.
• Output streams: The output data is organized into insightful streams and is used to influence the behavior of smart objects, and it is passed on for storage and further processing in the cloud. Communication with the cloud often happens through a standard publisher/subscriber messaging protocol, such as MQTT (see the sketch below).
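
To make the output stage concrete, here is a minimal sketch that publishes one APU output record northbound over MQTT, using the paho-mqtt Python client. The broker hostname, topic name, and payload fields are illustrative assumptions, not part of the source material.

# Minimal sketch: an APU publishing one output record to the cloud over MQTT.
# Broker host, topic, and payload fields are illustrative assumptions.
import json
import paho.mqtt.publish as publish

# An insightful output record produced by the APU (hypothetical fields).
record = {"sensor": "pump-17", "window": "60s", "avg_temp_c": 71.4}

publish.single(
    "site1/analytics/output",       # assumed topic the cloud subscribes to
    payload=json.dumps(record),
    qos=1,
    hostname="broker.example.com",  # hypothetical cloud-side MQTT broker
)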
In order to perform analysis in real time, the APU needs to perform the following functions (a minimal code sketch of these functions follows the list):
• Filter: The filtering function identifies the information that is considered important and discards the rest.
• Transform: Once the data is filtered, it needs to be formatted for processing.
• Time: A timing context needs to be established for the streaming data, typically by organizing it into time windows.
• Correlate: Streaming data becomes most useful when it is combined with other data streams and sources.
• Match patterns: Pattern matching is used to gain greater insight into the data, for example by flagging sequences or thresholds of interest.
• Improve business intelligence: The ultimate goal is to turn these insights into improved business intelligence.
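
The following is a minimal, self-contained Python sketch of these APU functions over a toy stream of sensor readings. The record format, window size, and overheat threshold are illustrative assumptions, not from the source.

# Minimal sketch of the APU functions over a toy stream of sensor readings.
# Record format, window size, and thresholds are illustrative assumptions.
from collections import defaultdict

# Raw input data: (timestamp_seconds, sensor_id, temperature_celsius)
raw_stream = [
    (0, "pump-17", 70.1), (5, "pump-17", 70.9), (61, "pump-17", 88.2),
    (63, "fan-02", 41.0), (118, "pump-17", 89.5), (121, "fan-02", 40.2),
]

def apu(stream, window_s=60, alert_f=185.0):
    windows = defaultdict(list)
    for ts, sensor, temp_c in stream:
        if temp_c is None:                  # Filter: drop unusable readings
            continue
        temp_f = temp_c * 9 / 5 + 32        # Transform: reformat for processing
        win = ts // window_s                # Time: assign a time-window context
        windows[(win, sensor)].append(temp_f)

    for (win, sensor), temps in sorted(windows.items()):
        avg = sum(temps) / len(temps)       # Correlate: combine readings per window
        # Match patterns: flag windows whose average crosses a threshold
        pattern = "OVERHEAT" if avg > alert_f else "normal"
        # Output stream: an insightful record that a microservice could act on
        yield {"window": win, "sensor": sensor, "avg_f": round(avg, 1),
               "pattern": pattern}

for out in apu(raw_stream):
    print(out)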

2) Explain the different components of the Flexible NetFlow (FNF) architecture.


3) Explain the different steps and phases of the OCTAVE Allegro methodology.
4) Discuss big data analytics tools and technologies.
Big data analytics can consist of many different software pieces that together collect, store, manipulate, and analyze all the different data types. Generally, the industry looks to the "three Vs" to categorize big data (a small ingest sketch follows the list):
Velocity
• Refers to how quickly data is being collected and analyzed.
• The Hadoop Distributed File System (HDFS) is designed to ingest and process data very quickly.
• Smart objects can generate machine and sensor data at a very fast rate and require databases or file systems capable of equally fast ingest functions.
Variety
• Refers to the different types of data.
• Data is often categorized as structured, semi-structured, or unstructured.
• Different database technologies may be capable of accepting only one of these types.
• Hadoop is able to collect and store all three types.
Volume
• Refers to the scale of the data.
• Typically, this is measured from gigabytes on the very low end to petabytes or even exabytes of data on the other extreme.
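
As a concrete illustration of the velocity and variety points, here is a minimal sketch that writes a batch of semi-structured JSON-lines sensor records into HDFS over WebHDFS, using the third-party hdfs Python package (HdfsCLI). The NameNode URL, user, and target path are illustrative assumptions.

# Minimal sketch: writing semi-structured sensor records into HDFS via
# WebHDFS, using the third-party `hdfs` Python package (HdfsCLI).
# The NameNode URL, user, and target path are illustrative assumptions.
import json
from hdfs import InsecureClient

client = InsecureClient("http://namenode.example.com:9870", user="analytics")

# A small batch of machine-generated readings (hypothetical schema).
records = [
    {"ts": 1700000000 + i, "sensor": "pump-17", "temp_c": 70.0 + 0.3 * i}
    for i in range(3)
]
payload = "".join(json.dumps(r) + "\n" for r in records)

# JSON lines are semi-structured data, one of the three variety categories
# Hadoop can store alongside structured and unstructured data.
client.write("/data/sensors/pump-17.jsonl", data=payload,
             encoding="utf-8", overwrite=True)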

5) Explain in detail how IT and OT security practices and systems vary in real time.
Or: explain the Purdue Model for Control Hierarchy and OT network characteristics.
6) Explain OCTAVE and FAIR formal risk analysis.
7) Explain the elements of Hadoop with a neat diagram.
