Glossary Items

S

  1. A cryptographic network protocol for initiating text-based shell sessions on remote machines in a secure way. Secure Shell (SSH) provides a secure channel over an unsecured network in a client-server architecture, connecting an SSH client application with an SSH server.
    SSH provides a secure channel over an unsecured network in a client-server architecture, connecting an SSH client application with an SSH server. Common applications include remote command-line login and remote command execution, but any network service can be secured with SSH. The protocol specification distinguishes between two major versions, referred to as SSH-1 and SSH-2.
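    As a minimal illustration of remote command execution over SSH, the Python sketch below uses the third-party paramiko library; the host name, user name and key path are placeholders.

      # Run one command on a remote machine over SSH (paramiko; host/user/key are placeholders).
      import paramiko

      client = paramiko.SSHClient()
      client.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # demo only: accept unknown host keys
      client.connect("host.example.com", username="deploy", key_filename="/home/deploy/.ssh/id_ed25519")

      stdin, stdout, stderr = client.exec_command("uname -a")       # remote command execution
      print(stdout.read().decode())
      client.close()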
  2. Security controls are safeguards or countermeasures to avoid, detect, counteract, or minimize security risks to physical property, information, computer systems, or other assets.
    Controls include any process, policy, device, practice, or other actions which modify risk. Individual controls are often designed to act together to increase effective protection. Systems of controls can be referred to as frameworks or standards. Frameworks can enable an organization to manage security controls across different types of assets with consistency. For example, a framework can help an organization manage controls over access regardless of the type of computer operating system. This also enables an organization to assess overall risk. Risk-aware organizations may choose proactively to specify, design, implement, operate and maintain their security controls, usually by assessing the risks and implementing a comprehensive security management framework such as ISO/IEC 27001:2013, the Information Security Forum's Standard of Good Practice for Information Security, or NIST SP 800-53. In telecommunications, security controls are defined as security services as part of the OSI Reference Model by the ITU-T X.800 Recommendation. X.800 and ISO 7498-2 (Information processing systems - Open systems interconnection - Basic Reference Model - Part 2: Security architecture) are technically aligned. For business-to-business facing companies whose service may affect the financial statements of the other company, the prospect may require successful audit reports of policy controls, such as an SSAE 16 report, before granting them authorization as a vendor.
  3. Function security consists of privileges unconditionally granted to a role and used to control access to a page or a specific widget or functionality within a page, including services, screens, and flows, and typically used in the control of the main menu.
    Function security involves granting a user, by means of the user's membership in a role, the ability to perform operations in pages or task flows such as view or manage. A function security policy consists of privileges assigned to duty roles and those duty roles assigned to a job or abstract role. Function security policies are defined in the Authorization Policy Manager (APM). The functions that a user can access via roles are interface elements, such as the pages or widgets of a task flow. Functions are organized separately from menu navigation and access to functions is granted to users via roles. Policies composed of grants with access entitlement to components are stored in the policy store, and application roles within role hierarchies are defined with access rights through policies. The access entitlement to a component consists of allowable actions, or privileges, on the component. Users of Oracle Fusion Applications must be able to access the functions necessary for performing their jobs and be excluded from functions that are irrelevant or improper to their roles in the enterprise. This may require changes to the roles available for provisioning. For the broadest possible access to the functionality in Oracle Fusion Applications, the role to which broad entitlement is granted would be a role high in the role hierarchy, such as worker. Such broad entitlement should not include access rights to any functions that violate the security policies of the enterprise, but allow performance of all duties associated with the role.
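    The role-based structure described above can be sketched generically. The Python sketch below is an illustrative model with hypothetical role and privilege names, not the Oracle APM API.

      # Privileges are granted to (duty) roles, roles are granted to users, and access
      # to a page or function is decided from the union of the user's role privileges.
      ROLE_PRIVILEGES = {
          "accounts_payable_duty": {"view_invoice_page", "manage_invoice_page"},
          "employee_abstract_role": {"view_payslip_page"},
      }
      USER_ROLES = {"pat": {"accounts_payable_duty", "employee_abstract_role"}}

      def has_function_access(user, privilege):
          roles = USER_ROLES.get(user, set())
          granted = set().union(*(ROLE_PRIVILEGES.get(r, set()) for r in roles))
          return privilege in granted

      print(has_function_access("pat", "manage_invoice_page"))   # True
      print(has_function_access("pat", "manage_supplier_page"))  # False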
  4. Rules, directives and practices that govern how assets, including sensitive information, are managed, protected and distributed within an organization and its systems, particularly those which impact the systems and associated elements.
    In business, a security policy is a document that states in writing how a company plans to protect the company's physical and information technology (IT) assets. A security policy is often considered to be a "living document", meaning that the document is never finished, but is continuously updated as technology and employee requirements change. A company's security policy may include an acceptable use policy, a description of how the company plans to educate its employees about protecting the company's assets, an explanation of how security measures will be carried out and enforced, and a procedure for evaluating the effectiveness of the security policy to ensure that necessary corrections will be made.
  5. Modular self-reconfiguring robotic systems are autonomous kinematic machines with variable morphology. Self-reconfiguring robots are also able to deliberately change their own shape by rearranging the connectivity of their parts in order to adapt to new circumstances or recover from damage.
    Beyond conventional actuation, sensing and control typically found in fixed-morphology robots, self-reconfiguring robots are also able to deliberately change their own shape by rearranging the connectivity of their parts, in order to adapt to new circumstances, perform new tasks, or recover from damage. For example, a robot made of such components could assume a worm-like shape to move through a narrow pipe, reassemble into something with spider-like legs to cross uneven terrain, then form a third arbitrary object (like a ball or wheel that can spin itself) to move quickly over a fairly flat terrain; it can also be used for making "fixed" objects, such as walls, shelters, or buildings.
  6. A self-organizing network (SON) is an automation technology designed to make the planning, configuration, management, optimization and healing of mobile radio access networks simpler and faster.
    A self-organizing network is an automation technology designed to make the planning, configuration, management, optimization and healing of mobile radio access networks simpler and faster. SON functionality and behavior have been defined and specified in generally accepted mobile industry recommendations produced by organizations such as 3GPP (3rd Generation Partnership Project) and the NGMN (Next Generation Mobile Networks).
  7. A self-powered dynamic system is defined as a dynamic system powered by its own excess kinetic energy, renewable energy or a combination of both. The particular area of work is the concept of fully or partially self-powered dynamic systems requiring zero or reduced external energy inputs.
    The particular area of work is the concept of fully or partially self-powered dynamic systems requiring zero or reduced external energy inputs.
  8. The Semantic Sensor Web (SSW) is a marriage of sensor and Semantic Web technologies. Encoding sensor data with Semantic Web languages enables more expressive representation and analysis.
    The Semantic Sensor Web (SSW) is a marriage of sensor and Semantic Web technologies. The encoding of sensor descriptions and sensor observation data with Semantic Web languages enables more expressive representation, advanced access, and formal analysis of sensor resources. The SSW annotates sensor data with spatial, temporal, and thematic semantic metadata. This technique builds on current standardization efforts within the Open Geospatial Consortium's Sensor Web Enablement (SWE) and extends them with Semantic Web technologies to provide enhanced descriptions and access to sensor data. Ontologies and other semantic technologies can be key enabling technologies for sensor networks because they will improve semantic interoperability and integration, as well as facilitate reasoning, classification and other types of assurance and automation not included in the Open Geospatial Consortium (OGC) standards. A semantic sensor network will allow the network, its sensors and the resulting data to be organised, installed and managed, queried, understood and controlled through high-level specifications. Ontologies for sensors provide a framework for describing sensors. These ontologies allow classification and reasoning on the capabilities and measurements of sensors, provenance of measurements and may allow reasoning about individual sensors as well as reasoning about the connection of a number of sensors as a macroinstrument. The sensor ontologies, to some degree, reflect the OGC standards and, given ontologies that can encode sensor descriptions, understanding how to map between the ontologies and OGC models is an important consideration. Semantic annotation of sensor descriptions and services that support sensor data exchange and sensor network management will serve a similar purpose as that espoused by semantic annotation of Web services. This research is conducted through the W3C Semantic Sensor Network Incubator Group (SSN-XG) activity.
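    As a small illustration of such semantic annotation, the Python sketch below describes one observation with the W3C SSN/SOSA vocabulary using the third-party rdflib package; the example.org URIs and the reading are placeholders.

      # Annotate a single sensor observation with SOSA terms and print it as Turtle.
      from rdflib import Graph, Namespace, Literal
      from rdflib.namespace import RDF, XSD

      SOSA = Namespace("http://www.w3.org/ns/sosa/")
      EX = Namespace("http://example.org/sensors/")

      g = Graph()
      obs = EX["observation/42"]
      g.add((obs, RDF.type, SOSA.Observation))
      g.add((obs, SOSA.madeBySensor, EX["thermometer-1"]))
      g.add((obs, SOSA.observedProperty, EX["airTemperature"]))
      g.add((obs, SOSA.hasSimpleResult, Literal(21.4)))
      g.add((obs, SOSA.resultTime, Literal("2017-06-01T12:00:00Z", datatype=XSD.dateTime)))

      print(g.serialize(format="turtle"))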
  9. Sensor analytics is the statistical analysis of data that is created by wired or wireless sensors. A primary goal of sensor analytics is to detect anomalies.
    A primary goal of sensor analytics is to detect anomalies. The insight that is gained by examining deviations from an established point of reference can have many uses, including predicting and proactively preventing equipment failure in a manufacturing plant, alerting a nurse in an electronic intensive care unit (eICU) when a patient’s blood pressure drops, or allowing a data center administrator to make data-driven decisions about heating, ventilating and air conditioning (HVAC). Because sensors are often always on, it can be challenging to collect, store and interpret the tremendous amount of data they create. A sensor analytics system can help by integrating event-monitoring, storage and analytics software in a cohesive package that will provide a holistic view of sensor data. Such a system has three parts: the sensors that monitor events in real-time, a scalable data store and an analytics engine. Instead of analyzing all data as it is being created, many engines perform time-series or event-driven analytics, using algorithms to sample data and sophisticated data modeling techniques to predict outcomes. These approaches may change, however, as advancements in big data analytics, object storage and event stream processing technologies will make real-time analysis easier and less expensive to carry out. Most sensor analytics systems analyze data at the source as well as in the cloud. Intermediate data analysis may also be carried out at a sensor hub that accepts inputs from multiple sensors, including accelerometers, gyroscopes, magnetometers and pressure sensors. The purpose of intermediate data analysis is to filter data locally and reduce the amount of data that needs to be transported to the cloud. This is often done for efficiency reasons, but it may also be carried out for security and compliance reasons. The power of sensor analytics comes from not only quantifying data at a particular point in time, but by putting the data in context over time and examining how it correlates with other, related data. It is expected that as the Internet of Things (IoT) becomes a mainstream concern for many industries and wireless sensor networks become ubiquitous, the need for data scientists and other professionals who can work with the data that sensors create will grow -- as will the demand for data artists and software that helps analysts present data in a way that’s useful and easily understood.
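    A minimal sketch of the anomaly-detection idea, using a rolling baseline and a z-score threshold (pure Python; the window size, threshold and sample values are illustrative):

      from collections import deque
      from statistics import mean, stdev

      def detect_anomalies(readings, window=20, threshold=3.0):
          history = deque(maxlen=window)                 # rolling point of reference
          for t, value in enumerate(readings):
              if len(history) == window and stdev(history) > 0:
                  z = (value - mean(history)) / stdev(history)
                  if abs(z) > threshold:
                      yield t, value, z                  # deviation from the baseline
              history.append(value)

      samples = [20.1, 20.3, 19.9] * 10 + [35.0]         # sudden spike at the end
      for t, v, z in detect_anomalies(samples):
          print(f"t={t} value={v} z-score={z:.1f}")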
  10. Sensor fusion is the combining of sensory data or data derived from disparate sources such that the resulting information has less uncertainty than would be possible when these sources were used individually.
    Sensor fusion is the combining of sensory data or data derived from disparate sources such that the resulting information has less uncertainty than would be possible when these sources were used individually. For example, because of the quantity of sensors, a NASA un-crewed vehicle on Mars requires sensor fusion to detect if there has been a failure. Sensor fusion is also known as (multi-sensor) data fusion and is a subset of information fusion.
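    A minimal numerical sketch of the uncertainty-reduction idea, fusing two noisy estimates of the same quantity by inverse-variance weighting (the measurements and variances are made up):

      def fuse(m1, var1, m2, var2):
          # Weight each measurement by the inverse of its variance.
          w1, w2 = 1.0 / var1, 1.0 / var2
          fused = (w1 * m1 + w2 * m2) / (w1 + w2)
          fused_var = 1.0 / (w1 + w2)                    # smaller than var1 and var2
          return fused, fused_var

      # e.g. GPS range 10.0 m (variance 4.0) fused with odometry 10.6 m (variance 1.0)
      print(fuse(10.0, 4.0, 10.6, 1.0))                  # approx. (10.48, 0.8): uncertainty reduced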
  11. A technology that connects sensor data and processes them. In this way, the hub does part of a processor's data-processing job. Sensor hubs share some of the workload otherwise performed by a computer or other device’s main CPU.
    In this way, the hub does part of a processor's data-processing job.
  12. Sensor Web Enablement (SWE) is a suite of web service interfaces abstracting from the heterogeneity of sensor (network) communication. The phrase the "sensor web" is also associated with a sensing system which heavily utilizes the World Wide Web.
    The concept of the sensor web is a type of sensor network that is especially well suited for environmental monitoring. The phrase the "sensor web" is also associated with a sensing system which heavily utilizes the World Wide Web. OGC's Sensor Web Enablement (SWE) framework defines a suite of web service interfaces and communication protocols abstracting from the heterogeneity of sensor (network) communication.
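    As an illustration of the web-service style of access, the Python sketch below sends a Sensor Observation Service (SOS) GetCapabilities request with the third-party requests package; the endpoint URL is a placeholder.

      import requests

      endpoint = "https://sensors.example.org/sos"                  # hypothetical SOS endpoint
      params = {"service": "SOS", "request": "GetCapabilities"}     # standard SOS key-value parameters
      response = requests.get(endpoint, params=params, timeout=30)
      print(response.status_code)
      print(response.text[:500])                                    # start of the XML capabilities document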
  13. Serverless computing or function as a service (FaaS) is a cloud computing execution model that allows users to build and run applications without having to provision or manage servers.
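    A minimal sketch of a single stateless function in the FaaS style; the (event, context) signature follows AWS Lambda's Python convention, and the event shape is illustrative.

      import json

      def handler(event, context):
          # The platform provisions and scales the servers; the developer supplies only this function.
          name = event.get("name", "world")
          return {"statusCode": 200, "body": json.dumps({"message": f"hello, {name}"})}

      # Local test call; in production the cloud platform invokes the function on demand.
      print(handler({"name": "IoT"}, context=None))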
  14. Service-oriented architecture is an approach used to create an architecture based upon the use of services. Services (such as RESTful Web services) carry out some small function, such as producing data, validating a customer, or providing simple analytical services.
    Services (such as RESTful Web services) carry out some small function, such as producing data, validating a customer, or providing simple analytical services.
  15. A service-oriented architecture is an approach used to create an architecture based upon the use of services. Services (such as RESTful Web services) carry out some small function, such as producing data, validating a customer, or providing simple analytical services.
    A service-oriented architecture is an architectural pattern in computer software design in which application components provide services to other components via a communications protocol, typically over a network. The principles of service-orientation are independent of any vendor, product or technology.
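    As a small illustration, the Python sketch below exposes one such service (customer validation) as a RESTful endpoint using the third-party Flask package; the route and customer IDs are placeholders.

      from flask import Flask, jsonify

      app = Flask(__name__)
      KNOWN_CUSTOMERS = {"C-1001", "C-1002"}             # illustrative data store

      @app.route("/customers/<customer_id>/valid")
      def validate_customer(customer_id):
          # One small function; other components consume it over HTTP.
          return jsonify({"customer": customer_id, "valid": customer_id in KNOWN_CUSTOMERS})

      if __name__ == "__main__":
          app.run(port=8080)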
  16. Infrastructure such as datacenters that support cloud computing services, in order to sustain scalability.
    According to Gartner, “the cost for service providers to deliver infrastructure will plunge almost 40 percent by 2017.”
  17. A service set identifier (SSID) is a sequence of characters that uniquely names a wireless local area network (WLAN). An SSID is sometimes referred to as a "network name."
    A service set identifier (SSID) is a sequence of characters that uniquely names a wireless local area network (WLAN). An SSID is sometimes referred to as a "network name." This name allows stations to connect to the desired network when multiple independent networks operate in the same physical area. Each set of wireless devices communicating directly with each other is called a basic service set (BSS). Several BSSs can be joined together to form one logical WLAN segment, referred to as an extended service set (ESS). A Service Set Identifier (SSID) is simply the 1-32 byte alphanumeric name given to each ESS. For example, a departmental WLAN (ESS) may consist of several access points (APs) and dozens of stations, all using the same SSID. Another organization in the same building may operate its own departmental WLAN, composed of APs and stations using a different SSID. The purpose of SSID is to help stations in department A find and connect to APs in department A, ignoring APs belonging to department B. Each AP advertises its presence several times per second by broadcasting beacon frames that carry the ESS name (SSID). Stations can discover APs by passively listening for beacons, or they can send probe frames to actively search for an AP with the desired SSID. Once the station locates an appropriately-named AP, it can send an associate request frame containing the desired SSID. The AP replies with an associate response frame, also containing SSID. Some frames are permitted to carry a null (zero length) SSID, called a broadcast SSID. For example, a station can send a probe request that carries a broadcast SSID; the AP must return its actual SSID in the probe response. Some APs can be configured to send a zero-length broadcast SSID in beacon frames instead of sending their actual SSID. However, it is not possible to keep an SSID value secret, because the actual SSID (ESS name) is carried in several frames.
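    A hedged sketch of passive discovery: list the SSIDs carried in beacon frames using the third-party scapy package. This assumes a wireless interface already in monitor mode (the interface name is a placeholder) and root privileges.

      from scapy.all import sniff, Dot11Beacon, Dot11Elt

      seen = set()

      def on_frame(pkt):
          if pkt.haslayer(Dot11Beacon):
              # The first information element of a beacon carries the SSID (possibly zero length).
              ssid = pkt[Dot11Elt].info.decode(errors="ignore") or "<broadcast SSID>"
              if ssid not in seen:
                  seen.add(ssid)
                  print(ssid)

      sniff(iface="wlan0mon", prn=on_frame, timeout=30)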
  18. Shock Sensing is a MEMS concept referring to the detection of sudden impacts at a predetermined level. Typical applications include shut-off sensing, condition monitoring, and tap detection for data entry.
    Typical applications include shut-off sensing, condition monitoring, and tap detection for data entry.
  19. Simultaneous Localization and Mapping (SLAM) is the computational problem of constructing or updating a map of an unknown environment while simultaneously keeping track of an agent's location within it.
    A technique originating from robotics and computer vision, SLAM is a procedure by which a computer scans an environment and constructs a digital map of the area. This has become a standard for anchoring augmented reality content in real-world, physical spaces. This is the process Apple ARKit apps undertake to detect surfaces.
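    In the standard probabilistic formulation, full SLAM estimates the trajectory and the map jointly from the observations and controls, which can be written (in LaTeX notation) as the posterior

      p(x_{1:t}, m \mid z_{1:t}, u_{1:t})

    where x_{1:t} is the sequence of poses, m the map, z_{1:t} the observations and u_{1:t} the controls; online SLAM keeps only the current pose, p(x_t, m \mid z_{1:t}, u_{1:t}), obtained by marginalizing out past poses.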
  20. Single-Carrier Radio Transmission Technology (1xRTT) is an operational mode for CDMA2000 wireless communications that specifies a single (1x) 1.25 MHz channel for data transfer.
    1xRTT was the first version of CDMA2000, which is the International Telecommunication Union's (ITU) CDMA (Code-Division Multiple Access) implementation of the IMT-2000 standard. The theoretical peak data rate of basic 1xRTT systems is approximately 144 kilobits per second (Kbps), although in practice the highest attainable speed is closer to 80 Kbps. Versions of CDMA2000 have been developed by Ericsson and Qualcomm.
  21. A field of study concerned with the understanding of the environment critical to decision-makers in complex, dynamic areas. Cyber situation awareness tools/techniques will need to be developed to enable IoT-based infrastructures to be monitored.
    Cyber situation awareness tools/techniques will need to be developed to enable IoT-based infrastructures to be monitored. Advances are required to enable operators to adapt the protection of the IoT during the lifecycle of the system and assist operators to take the most appropriate protective action during attacks.
  22. A smart card is a device that includes an embedded integrated circuit chip (ICC) that can be either a secure microcontroller or equivalent intelligence with internal memory or a memory chip alone.
    The card connects to a reader with direct physical contact or with a remote contactless radio frequency interface. Smart cards can be either contact or contactless. Smart cards can provide personal identification, authentication, data storage, and application processing. Smart cards may provide strong security authentication for single sign-on (SSO) within large organizations. Smart cards have been advertised as suitable for personal identification tasks, because they are engineered to be tamper resistant. The chip usually implements some cryptographic algorithm. There are, however, several methods for recovering some of the algorithm's internal state. Differential power analysis involves measuring the precise time and electric current required for certain encryption or decryption operations. This can be used to deduce the on-chip private key used by public key algorithms such as RSA. Some implementations of symmetric ciphers can be vulnerable to timing or power attacks as well. Smart cards can be physically disassembled by using acid, abrasives, solvents, or some other technique to obtain unrestricted access to the on-board microprocessor. Although such techniques may involve a risk of permanent damage to the chip, they permit much more detailed information (e.g., photomicrographs of encryption hardware) to be extracted.
  23. A computer protocol that facilitates, verifies and/or enforces the negotiation or performance of a contract. Smart contracts allow the performance of transactions without the need for third parties.
    Smart contracts allow the performance of transactions without the need for third parties. Many contracts can be partially or fully self-executing and self-performing. A digital smart contract reduces the transaction costs of traditional contracting. Smart contracts gained widespread popularity with the rise of cryptocurrencies like Bitcoin, and platforms like Ethereum.
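    A toy Python illustration of the self-executing idea (this is not an actual blockchain contract; the names and amounts are made up): the contract itself releases payment once the agreed condition is met, with no third party deciding the outcome.

      class EscrowContract:
          def __init__(self, buyer, seller, amount):
              self.buyer, self.seller, self.amount = buyer, seller, amount
              self.delivered = False
              self.settled = False

          def confirm_delivery(self):
              self.delivered = True
              self._execute()                            # the contract enforces itself

          def _execute(self):
              if self.delivered and not self.settled:
                  self.settled = True
                  print(f"release {self.amount} from {self.buyer} to {self.seller}")

      contract = EscrowContract("alice", "bob", 100)
      contract.confirm_delivery()                        # -> release 100 from alice to bob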
  24. Smart data is digital information that is formatted so it can be acted upon at the collection point before being sent to a downstream analytics platform. If Big Data is the oil of the future, then Smart Data is the fuel that drives the production of the future.
    Currently, data are just data. To turn them into information, they must be interpreted. This is the step from perception (recognizing) to cognition (understanding). Books, for example, are at first merely collections of letters. They only become knowledge when they are processed and interpreted in the brain. In the context of intelligent automation, the central focus is on the topics of data communication, process modeling, machine learning, autonomous self-configuration and process optimization.
  25. A smart factory is a learning factory, where people leverage data and technology constantly. Essentially, it is the implementation of Industry 4.0 technology.
    Central to the success of a smart factory is the technology that makes data collection possible: the sensors embedded in industrial equipment and, importantly, the industrial software to collect and analyze the data to ensure that the correct decisions are made to improve the factory's KPIs.
  26. Smart Manufacturing aims to reduce manufacturing costs from the perspective of real-time energy management, energy productivity, and process energy efficiency.
    Initiatives will create a networked, data-driven process platform that combines innovative modeling and simulation with advanced sensing and control. It integrates efficiency intelligence in real time across an entire production operation, with primary emphasis on minimizing energy and material use; this is particularly relevant for energy-intensive manufacturing sectors.
  27. A smart meter is usually an electronic device that records consumption of electric energy in intervals of an hour or less. It communicates that information at least daily back to the utility for monitoring and billing.
    Smart meters enable two-way communication between the meter and the central system. Unlike home energy monitors, smart meters can gather data for remote reporting. Such an advanced metering infrastructure (AMI) differs from traditional automatic meter reading (AMR) in that it enables two-way communications with the meter.
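    A minimal sketch of interval recording and daily reporting (pure Python; the meter ID, field names and readings are illustrative):

      from datetime import date

      hourly_kwh = [0.4, 0.3, 0.3, 0.2, 0.2, 0.3, 0.6, 0.9, 1.1, 1.0, 0.9, 0.8,
                    0.8, 0.7, 0.7, 0.8, 1.0, 1.3, 1.5, 1.4, 1.2, 0.9, 0.6, 0.5]

      daily_report = {
          "meter_id": "MTR-0042",
          "date": date(2017, 6, 1).isoformat(),
          "intervals_kwh": hourly_kwh,                   # one reading per hour
          "total_kwh": round(sum(hourly_kwh), 2),
      }
      print(daily_report["total_kwh"])                   # reported back to the utility once a day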
  28. A smart sensor is a device that takes input from the physical environment and uses built-in compute resources to perform predefined functions upon detection of specific input, and then processes data before passing it on.
    A smart sensor is a device that takes input from the physical environment and uses built-in compute resources to perform predefined functions upon detection of specific input, and then processes data before passing it on. Smart sensors enable more accurate and automated collection of environmental data with less erroneous noise amongst the accurately recorded information. These devices are used for monitoring and control mechanisms in a wide variety of environments including smart grids, battlefield reconnaissance, exploration and a great number of science applications. The smart sensor is also a crucial and integral element in the Internet of Things (IoT), the increasingly prevalent environment in which almost anything imaginable can be outfitted with a unique identifier (UID) and the ability to transmit data over the Internet or a similar network. One implementation of smart sensors is as components of a wireless sensor and actuator network (WSAN) whose nodes can number in the thousands, each of which is connected with one or more other sensors and sensor hubs as well as individual actuators. Compute resources are typically provided by low-power mobile microprocessors. At a minimum, a smart sensor is made of a sensor, a microprocessor and communication technology of some kind. The compute resources must be an integral part of the physical design -- a sensor that just sends its data along for remote processing is not considered a smart sensor. A smart sensor may also include a number of other components besides the primary sensor. These components can include transducers, amplifiers, excitation control, analog filters and compensation. A smart sensor also incorporates software-defined elements that provide functions such as data conversion, digital processing and communication to external devices.
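    A minimal sketch of on-device processing before transmission; read_adc and transmit are hypothetical placeholders for the hardware read and the communication layer.

      import random, time

      def read_adc():                                    # placeholder for the physical sensor read
          return 21.0 + random.uniform(-0.5, 0.5)

      def transmit(value):                               # placeholder for the radio/network interface
          print(f"sending {value:.2f}")

      alpha, smoothed, last_sent = 0.2, read_adc(), None
      for _ in range(50):
          smoothed = alpha * read_adc() + (1 - alpha) * smoothed   # on-device low-pass filter
          if last_sent is None or abs(smoothed - last_sent) > 0.3: # report only significant changes
              transmit(smoothed)
              last_sent = smoothed
          time.sleep(0.01)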
  29. Web-based socio-technical systems in which the human and technological elements play the role of participant machinery with respect to the mechanistic realization of system-level processes.
    They are part of the Industrie 4.0 vision. The underlying idea is that machines share their knowledge like in social networks – information about themselves as well as experiences and lessons learned from their processes. At the same time, social machines coordinate the information received and learn from the network too. Similarly to Facebook users, they independently obtain information from the Internet and connected social machine networks. Through swarm experience, they are aware of the best parameters for machining a particular material, for example, and they exchange them with befriended machines.
  30. The socialization of the Internet of Things is the integration of connected things into social life. For example, a TV not only informs you that your favorite TV show is on in an hour but also lets you know which of your friends like the show, so it is possible to meet up and watch together.
    An example would be a TV that not only informs you that your favorite TV show is on in an hour, but also lets you know which of your friends like the show so it is possible to meet up and watch together.
  31. A subscription-based model where a monthly fee is charged for using software, rather than an upfront purchase. Software as a Service (SaaS) allows organizations to access business functionality at a cost typically less than paying for licensed applications.
    Software as a Service allows organizations to access business functionality at a cost typically less than paying for licensed applications since SaaS pricing is based on a monthly fee. Also, because the software is hosted remotely, users don't need to invest in additional hardware. Software as a Service removes the need for organizations to handle the installation, set-up and often daily upkeep and maintenance. Software as a Service may also be referred to as simply hosted applications.
  32. A Software Development Kit (SDK) consists of a group of development tools used to build an application for a specific platform. Typically, an SDK includes a visual screen builder, an editor, a compiler, a linker, and sometimes other facilities.
    An SDK is a set of programs used by a computer programmer to write application programs. Typically, an SDK includes a visual screen builder, an editor, a compiler, a linker, and sometimes other facilities.
  33. An approach to network management that decouples control of information flow from the hardware and gives it to a software controller. A software-defined network (SDN) allows less data to travel wirelessly and manages network services by abstracting lower-level functionality, making it a strategy for IoT networks.
    SDN allows less data to travel wirelessly, making it a potential strategy for IoT networks, and it allows administrators to manage network services by abstracting lower-level functionality, so that network services can be specified without coupling these specifications to network interfaces. Software-defined networking (SDN) is an umbrella term encompassing several kinds of network technology aimed at making the network as agile and flexible as the virtualized server and storage infrastructure of the modern data center. The goal of SDN is to allow network engineers and administrators to respond quickly to changing business requirements. In a software-defined network, a network administrator can shape traffic from a centralized control console without having to touch individual switches, and can deliver services to wherever they are needed in the network, without regard to what specific devices a server or other device is connected to. The key technologies are functional separation, network virtualization and automation through programmability.
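    A hedged sketch of the programmability idea: push a traffic rule to a central controller instead of configuring switches one by one. The northbound REST endpoint and the JSON schema below are hypothetical, for illustration only (uses the third-party requests package).

      import requests

      controller = "https://sdn-controller.example.org/api/flows"   # placeholder controller URL
      rule = {
          "switch": "edge-switch-7",
          "match": {"dst_port": 1883},                   # e.g. MQTT traffic from IoT devices
          "action": {"rate_limit_kbps": 256},
      }
      response = requests.post(controller, json=rule, timeout=10)
      print(response.status_code)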
  34. Things in the workplace that replace or augment human labour. A “steel-collar workforce” is capable of tirelessly and efficiently performing repetitive tasks or monitoring.
    A “steel-collar workforce” is capable of tirelessly and efficiently performing repetitive tasks or monitoring. Playing off of the terms “blue collar” and “white collar,” the phrase was first coined in the early 1980s referring to a robotic threat to US manufacturing jobs.
  35. A model that organizes data elements and standardizes how the data elements relate to one another. This includes data contained in relational databases and spreadsheets.
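    A small Python sketch of structured data in a relational table, where the model defines the data elements (columns) and how values relate to them (standard-library sqlite3; the table and values are illustrative):

      import sqlite3

      conn = sqlite3.connect(":memory:")
      conn.execute("CREATE TABLE readings (sensor_id TEXT, ts TEXT, value REAL)")
      conn.execute("INSERT INTO readings VALUES (?, ?, ?)", ("t-1", "2017-06-01T12:00", 21.4))
      for row in conn.execute("SELECT sensor_id, value FROM readings"):
          print(row)
      conn.close()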
  36. Supervisory Control and Data Acquisition (SCADA) is an industrial control system typically used for geographically dispersed assets, often scattered over large distances. It is used for process control: the gathering of data in real time from remote locations in order to control equipment and conditions.
    SCADA is a category of software application program for process control, the gathering of data in real time from remote locations in order to control equipment and conditions. SCADA is used in power plants as well as in oil and gas refining, telecommunications, transportation, and water and waste control.
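    A minimal sketch of the supervisory polling loop; read_remote_tag and write_remote_tag are hypothetical placeholders for a protocol-specific driver (for example Modbus or DNP3), and the station names and limits are made up.

      import time

      def read_remote_tag(station, tag):                 # placeholder remote read
          return 92.5                                    # e.g. tank level in percent

      def write_remote_tag(station, tag, value):         # placeholder remote control action
          print(f"{station}: set {tag} = {value}")

      HIGH_LIMIT = 90.0
      for _ in range(3):                                 # bounded loop for the sketch
          level = read_remote_tag("pump-station-3", "tank_level_pct")
          if level > HIGH_LIMIT:
              write_remote_tag("pump-station-3", "inlet_valve_open", False)
          time.sleep(1)                                  # poll interval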
  37. System autonomy is an ability of an intelligent system to independently compose and select among different courses of action to accomplish goals based on its knowledge and understanding of the world, itself, and the situation.
    An example of IT automation in practice might be as simple as the integration of a form into a PDF that is automatically routed to the correct recipients, or as complex as automated provisioning of an offsite backup.
  38. A system configuration in systems engineering defines the computers, processes, and devices that compose the system and its boundary. A properly-configured system avoids resource-conflict problems and makes it easier to upgrade a system with new equipment.
    More generally, the system configuration is the specific definition of the elements that define and/or prescribe what a system is composed of. Alternatively, the term system configuration can be used to refer to a model (declarative) for abstract generalized systems. In this sense the usage of the configuration information is not tailored to any specific usage, but stands alone as a data set. A properly-configured system avoids resource-conflict problems, and makes it easier to upgrade a system with new equipment.
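    A small sketch of a declarative configuration plus a resource-conflict check (here, two devices claiming the same network port); the field names and values are illustrative.

      from collections import Counter

      config = {
          "computers": ["gateway-01"],
          "devices": [
              {"name": "plc-a", "port": 502},
              {"name": "camera-1", "port": 554},
              {"name": "plc-b", "port": 502},            # conflicts with plc-a
          ],
      }

      ports = Counter(d["port"] for d in config["devices"])
      conflicts = [port for port, count in ports.items() if count > 1]
      print("port conflicts:", conflicts)                # -> [502]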
  39. System environment is a context determining the setting and circumstances of all interactions and influences with the system of interest. The system environment includes developmental, technological, business, operational, organizational, economic, regulatory, ecological and social influences.
    The sum total of all surroundings of a living organism, including natural forces and other living things, which provide conditions for development and growth as well as of danger and damage. The environment of a system includes developmental, technological, business, operational, organizational, political, economic, legal, regulatory, ecological and social influences.
  40. A system on a chip (SoC) is an integrated circuit (IC) that integrates all components of a computer or other electronic system into a single chip. SoCs may contain digital, analog, mixed-signal, and often radio-frequency functions, all on a single chip substrate.
    SoCs may contain digital, analog, mixed-signal, and often radio-frequency functions, all on a single chip substrate. SoCs are very common in the mobile electronics market because of their low power consumption. A typical application is in the area of embedded systems.
  41. A system on a module (SOM) is a board-level circuit that integrates a system function in a single module. It may integrate digital and analogue functions on a single board. A typical application is in the area of embedded systems.
    The host system may be a computer, robot, vehicle or other embedded device. An SOM may integrate digital and analog functions on a single board. A typical application is in the area of embedded systems. Unlike a single-board computer, an SOM serves a special function, like an SoC. The device integrated on the SOM typically requires a high level of interconnection for reasons such as speed, timing and bus width, in a highly integrated module. There are benefits in building an SOM, as for an SoC; one notable result is a reduced cost of the base board or the main PCB. A major advantage of an SOM is design reuse: it can be integrated into many embedded computer applications.