Entropy, Volume 26, Issue 9 (September 2024) – 87 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
18 pages, 340 KiB  
Article
Some Results for Double Cyclic Codes over F_q + vF_q + v^2F_q
by Tenghui Deng and Jing Yang
Entropy 2024, 26(9), 803; https://doi.org/10.3390/e26090803 - 20 Sep 2024
Abstract
Let F_q be a finite field with an odd characteristic. In this paper, we present a new result about double cyclic codes over a finite non-chain ring. Specifically, we study the double cyclic code over F_q + vF_q + v^2F_q with v^3 = v, which is isomorphic to F_q × F_q × F_q. This study mainly involves generator polynomials and generator matrices. The generating polynomial of the dual code is also obtained. We show the relationship between the generating polynomials of the double cyclic codes and those of their dual codes. Finally, as an application of these results, we construct some optimal codes over F_3. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
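For context, the isomorphism used above is the standard Chinese remainder decomposition of the ring F_q[v]/(v^3 − v) when the characteristic is odd; the following sketch states it with its orthogonal idempotents (textbook algebra, not taken from the paper):

```latex
% R = F_q + vF_q + v^2F_q = F_q[v]/(v^3 - v), with char(F_q) odd, splits by the CRT:
R \;\cong\; \mathbb{F}_q[v]/(v)\times\mathbb{F}_q[v]/(v-1)\times\mathbb{F}_q[v]/(v+1)
  \;\cong\; \mathbb{F}_q\times\mathbb{F}_q\times\mathbb{F}_q .
% Orthogonal idempotents realizing the splitting (2 is invertible since q is odd):
e_0 = 1 - v^2,\qquad e_1 = \tfrac{1}{2}(v^2+v),\qquad e_2 = \tfrac{1}{2}(v^2-v),
\qquad e_i e_j=\delta_{ij}e_i,\qquad e_0+e_1+e_2=1 .
```

Every element of the ring then decomposes uniquely as a·e_0 + b·e_1 + c·e_2 with a, b, c in F_q, which is what allows a double cyclic code over the ring to be handled componentwise over F_q.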
13 pages, 406 KiB  
Article
An Additively Optimal Interpreter for Approximating Kolmogorov Prefix Complexity
by Zoe Leyva-Acosta, Eduardo Acuña Yeomans and Francisco Hernandez-Quiroz
Entropy 2024, 26(9), 802; https://doi.org/10.3390/e26090802 - 20 Sep 2024
Viewed by 95
Abstract
We study practical approximations of Kolmogorov prefix complexity (K) using IMP2, a high-level programming language. Our focus is on investigating the optimality of the interpreter for this language as the reference machine for the Coding Theorem Method (CTM). This method is designed to address applications of algorithmic complexity that differ from the popular traditional lossless compression approach based on the principles of algorithmic probability. The chosen model of computation is proven to be suitable for this task, and a comparison to other models and methods is conducted. Our findings show that CTM approximations using our model do not always correlate with the results from lower-level models of computation. This suggests that some models may require a larger program space to converge to Levin’s universal distribution. Furthermore, we compare the CTM with an upper bound on Kolmogorov complexity and find a strong correlation, supporting the CTM’s validity as an approximation method with finer-grained resolution of K. Full article
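As a rough illustration of the Coding Theorem Method itself, the sketch below enumerates short programs for a trivial stand-in interpreter (not the paper's IMP2 machine), accumulates the algorithmic probability m(s) of each output, and reports K(s) ≈ −log2 m(s):

```python
import math
from itertools import product

def toy_interpreter(program):
    """Hypothetical stand-in for a reference machine (NOT the paper's IMP2):
    the first bit picks a symbol and the remaining bits, counted as unary,
    give a repetition count.  Returns None for 'non-halting' programs."""
    if len(program) < 2:
        return None
    symbol = "a" if program[0] == "0" else "b"
    return symbol * (program[1:].count("1") + 1)

def ctm_estimates(max_len=12):
    """CTM sketch: m(s) = sum over halting programs p with output s of 2**(-len(p));
    the complexity estimate is K(s) ~ -log2 m(s)."""
    m = {}
    for length in range(1, max_len + 1):
        for bits in product("01", repeat=length):
            out = toy_interpreter("".join(bits))
            if out is not None:
                m[out] = m.get(out, 0.0) + 2.0 ** (-length)
    return {s: -math.log2(p) for s, p in m.items()}

if __name__ == "__main__":
    for s, k in sorted(ctm_estimates().items(), key=lambda kv: kv[1])[:5]:
        print(f"K({s!r}) ~ {k:.2f} bits")
```

Replacing the toy interpreter with a richer model of computation is exactly where the convergence questions raised in the abstract arise.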
14 pages, 342 KiB  
Article
Assessing Variable Importance for Best Subset Selection
by Jacob Seedorff and Joseph E. Cavanaugh
Entropy 2024, 26(9), 801; https://doi.org/10.3390/e26090801 - 19 Sep 2024
Viewed by 155
Abstract
One of the primary issues that arises in statistical modeling pertains to the assessment of the relative importance of each variable in the model. A variety of techniques have been proposed to quantify variable importance for regression models. However, in the context of best subset selection, fewer satisfactory methods are available. With this motivation, we here develop a variable importance measure expressly for this setting. We investigate and illustrate the properties of this measure, introduce algorithms for the efficient computation of its values, and propose a procedure for calculating p-values based on its sampling distributions. We present multiple simulation studies to examine the properties of the proposed methods, along with an application to demonstrate their practical utility. Full article
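A minimal sketch of the best subset search underlying such a measure, using an exhaustive enumeration scored by AIC (illustrative only; the paper's importance measure and p-value procedure are not reproduced here):

```python
import numpy as np
from itertools import combinations

def best_subset(X, y):
    """Exhaustive best subset selection for linear regression, scored by AIC.
    Returns the (AIC, column-index tuple) of the winning subset."""
    n, p = X.shape
    best = (np.inf, ())
    for k in range(p + 1):
        for subset in combinations(range(p), k):
            Xs = np.column_stack([np.ones(n)] + [X[:, j] for j in subset])
            beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
            rss = np.sum((y - Xs @ beta) ** 2)
            aic = n * np.log(rss / n) + 2 * (k + 1)
            best = min(best, (aic, subset))
    return best

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 6))
y = 2 * X[:, 0] - 1.5 * X[:, 3] + rng.normal(size=100)
print(best_subset(X, y))   # expected winner: the subset containing columns 0 and 3
```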
9 pages, 637 KiB  
Article
Golf Club Selection with AI-Based Game Planning
by Mehdi Khazaeli and Leili Javadpour
Entropy 2024, 26(9), 800; https://doi.org/10.3390/e26090800 - 19 Sep 2024
Viewed by 221
Abstract
In the dynamic realm of golf, where every swing can make the difference between victory and defeat, the strategic selection of golf clubs has become a crucial factor in determining the outcome of a game. Advancements in artificial intelligence have opened new avenues for enhancing the decision-making process, empowering golfers to achieve optimal performance on the course. In this paper, we introduce an AI-based game planning system that assists players in selecting the best club for a given scenario. The system considers factors such as distance, terrain, wind strength and direction, and quality of lie. A rule-based model provides the four best club options based on the player’s maximum shot data for each club. The player picks a club, shot, and target, and a probabilistic classification model identifies whether the shot represents a birdie opportunity, par zone, bogey zone, or worse. The results of our model show that taking into account factors such as terrain and atmospheric features increases the likelihood of a better shot outcome. Full article
(This article belongs to the Special Issue Learning from Games and Contests)
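A toy sketch of the kind of rule-based pre-filter the abstract describes; the carry distances, the one-yard-per-mph wind adjustment, and the club names are hypothetical placeholders, not the paper's model:

```python
def shortlist_clubs(distance_yd, wind_mph, headwind, carries):
    """Rank clubs by how closely the player's typical carry, adjusted for wind,
    matches the remaining distance, and keep the four best options."""
    adjusted = distance_yd + (wind_mph if headwind else -wind_mph)
    ranked = sorted(carries.items(), key=lambda kv: abs(kv[1] - adjusted))
    return [club for club, _ in ranked[:4]]

player_carries = {"driver": 250, "3-wood": 225, "5-iron": 185,
                  "7-iron": 160, "9-iron": 135, "PW": 115}
print(shortlist_clubs(distance_yd=170, wind_mph=10, headwind=True,
                      carries=player_carries))
```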
12 pages, 10278 KiB  
Article
Enhanced Magnetocaloric Properties of the (MnNi)0.6Si0.62(FeCo)0.4Ge0.38 High-Entropy Alloy Obtained by Co Substitution
by Zhigang Zheng, Pengyan Huang, Xinglin Chen, Hongyu Wang, Shan Da, Gang Wang, Zhaoguo Qiu and Dechang Zeng
Entropy 2024, 26(9), 799; https://doi.org/10.3390/e26090799 - 19 Sep 2024
Viewed by 238
Abstract
In order to improve the magnetocaloric properties of MnNiSi-based alloys, a new type of high-entropy magnetocaloric alloy was constructed. In this work, Mn0.6Ni1−xSi0.62Fe0.4CoxGe0.38 (x = 0.4, 0.45, and 0.5) alloys are found to exhibit magnetostructural first-order phase transitions from high-temperature Ni2In-type phases to low-temperature TiNiSi-type phases, so that the alloys can achieve giant magnetocaloric effects. We investigate why the hexagonal lattice-parameter ratio c_hexa/a_hexa gradually increases upon Co substitution, while the phase transition temperature (T_tr) and the isothermal magnetic entropy change (ΔS_M) tend to gradually decrease. In particular, the x = 0.4 alloy with remarkable magnetocaloric properties is obtained by tuning Co/Ni, showing a giant entropy change of 48.5 J·kg^-1·K^-1 at 309 K for 5 T and an adiabatic temperature change (ΔT_ad) of 8.6 K at 306.5 K. Moreover, the x = 0.55 HEA shows great hardness and compressive strength, with values of 552 HV2 and 267 MPa, respectively, indicating that the mechanical properties undergo an effective enhancement. The large ΔS_M and ΔT_ad may enable the MnNiSi-based HEAs to become a potential commercial magnetocaloric material. Full article
(This article belongs to the Section Multidisciplinary Applications)
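For reference, the two figures of merit quoted (ΔS_M and ΔT_ad) are conventionally extracted from magnetization and heat-capacity data via the Maxwell relation; these are the standard magnetocaloric formulas, not a derivation specific to this paper:

```latex
\Delta S_M(T,\Delta H) \;=\; \mu_0\!\int_{0}^{H_{\max}}\!\left(\frac{\partial M}{\partial T}\right)_{\!H}\,dH ,
\qquad
\Delta T_{\mathrm{ad}}(T,\Delta H) \;\approx\; -\,\frac{T}{C_p(T,H)}\,\Delta S_M(T,\Delta H) .
```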
19 pages, 1097 KiB  
Article
Nonparametric Expectile Shortfall Regression for Complex Functional Structure
by Mohammed B. Alamari, Fatimah A. Almulhim, Zoulikha Kaid and Ali Laksaci
Entropy 2024, 26(9), 798; https://doi.org/10.3390/e26090798 - 18 Sep 2024
Viewed by 195
Abstract
This paper treats the problem of risk management through a new conditional expected shortfall function. The new risk metric is defined by the expectile as the shortfall threshold. A nonparametric estimator based on the Nadaraya–Watson approach is constructed. The asymptotic property of the constructed estimator is established using a functional time-series structure. We adopt some concentration inequalities to fit this complex structure and to precisely determine the convergence rate of the estimator. The easy implementation of the new risk metric is shown through real and simulated data. Specifically, we show the feasibility of the new model as a risk tool by examining its sensitivity to the fluctuation in financial time-series data. Finally, a comparative study between the new shortfall and the standard one is conducted using real data. Full article
(This article belongs to the Section Complexity)
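The building blocks can be written in their generic forms (the paper's precise estimator may differ in detail): the level-τ expectile minimizes an asymmetrically weighted squared loss, and the functional Nadaraya–Watson weights are kernel weights on the semi-metric d(·,·):

```latex
\xi_\tau(x) \;=\; \arg\min_{\theta}\;
\mathbb{E}\!\left[\,\bigl|\tau-\mathbf{1}\{Y\le\theta\}\bigr|\,(Y-\theta)^2 \,\middle|\, X=x\right],
\qquad
W_i(x) \;=\; \frac{K\!\bigl(d(x,X_i)/h\bigr)}{\sum_{j=1}^{n} K\!\bigl(d(x,X_j)/h\bigr)} .
```

The expectile-based shortfall is then estimated by the kernel-weighted average of the responses exceeding the estimated threshold ξ̂_τ(x).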
20 pages, 476 KiB  
Review
Forced Friends: Why the Free Energy Principle Is Not the New Hamilton’s Principle
by Bartosz Michał Radomski and Krzysztof Dołęga
Entropy 2024, 26(9), 797; https://doi.org/10.3390/e26090797 - 18 Sep 2024
Viewed by 339
Abstract
The claim that the free energy principle is somehow related to Hamilton’s principle in statistical mechanics is ubiquitous throughout the subject literature. However, the exact nature of this relationship remains unclear. According to some sources, the free energy principle is merely similar to Hamilton’s principle of stationary action; others claim that it is either analogous or equivalent to it, while yet another part of the literature espouses the claim that it is a version of Hamilton’s principle. In this article, we aim to clarify the nature of the relationship between the two principles by investigating the two most likely interpretations of the claims that can be found in the subject literature. According to the strong interpretation, the two principles are equivalent and apply to the same subset of physical phenomena; according to the weak interpretation, the two principles are merely analogous to each other by virtue of their similar formal structures. As we show, adopting the stronger reading would lead to a dilemma that is untenable for the proponents of the free energy principle, thus supporting the adoption of the weaker reading for the relationship between the two constructs. Full article
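For orientation, the two principles under comparison are usually stated as follows (textbook formulations, not the authors' notation), which is where the reading of "similar formal structures" comes from:

```latex
% Hamilton's principle of stationary action:
\delta S \;=\; \delta\!\int_{t_1}^{t_2} L(q,\dot q,t)\,dt \;=\; 0 ,
% Free energy principle: self-organizing systems minimize variational free energy
F[q] \;=\; \mathbb{E}_{q(\psi)}\!\bigl[\ln q(\psi)-\ln p(\psi,s)\bigr]
      \;=\; D_{\mathrm{KL}}\!\bigl[q(\psi)\,\|\,p(\psi\mid s)\bigr]-\ln p(s) .
```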
21 pages, 3199 KiB  
Article
Developing an Early Warning System for Financial Networks: An Explainable Machine Learning Approach
by Daren Purnell, Jr., Amir Etemadi and John Kamp
Entropy 2024, 26(9), 796; https://doi.org/10.3390/e26090796 - 17 Sep 2024
Viewed by 585
Abstract
Identifying the influential variables that provide early warning of financial network instability is challenging, in part due to the complexity of the system, uncertainty of a failure, and nonlinear, time-varying relationships between network participants. In this study, we introduce a novel methodology to select variables that, from a data-driven and statistical modeling perspective, represent these relationships and may indicate that the financial network is trending toward instability. We introduce a novel variable selection methodology that leverages Shapley values and modified Borda counts, in combination with statistical and machine learning methods, to create an explainable linear model to predict relationship value weights between network participants. We validate this new approach with data collected from the March 2023 Silicon Valley Bank Failure. The models produced using this novel method successfully identified the instability trend using only 14 input variables out of a possible 3160. The use of parsimonious linear models developed by this method has the potential to identify key financial stability indicators while also increasing the transparency of this complex system. Full article
(This article belongs to the Section Multidisciplinary Applications)
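A compact sketch of the rank-aggregation step: ordinary model importances and a plain Borda count stand in here for the paper's Shapley values and modified Borda count, and the dataset is synthetic:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LassoCV

X, y = make_regression(n_samples=300, n_features=10, n_informative=3, random_state=0)

# Per-model importance scores (stand-ins for Shapley values).
rf = RandomForestRegressor(random_state=0).fit(X, y)
lasso = LassoCV().fit(X, y)
scores = [rf.feature_importances_, np.abs(lasso.coef_)]

# Borda-count aggregation of the per-model rankings.
p = X.shape[1]
borda = np.zeros(p)
for s in scores:
    order = np.argsort(-s)            # best feature first
    for rank, j in enumerate(order):
        borda[j] += p - rank          # p points for the top feature, p-1 for the next, ...
print("top features by Borda count:", np.argsort(-borda)[:5])
```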
17 pages, 1684 KiB  
Article
Robust Optimization Research of Cyber–Physical Power System Considering Wind Power Uncertainty and Coupled Relationship
by Jiuling Dong, Zilong Song, Yuanshuo Zheng, Jingtang Luo, Min Zhang, Xiaolong Yang and Hongbing Ma
Entropy 2024, 26(9), 795; https://doi.org/10.3390/e26090795 - 17 Sep 2024
Viewed by 332
Abstract
To mitigate the impact of wind power uncertainty and power–communication coupling on the robustness of a new power system, a bi-level mixed-integer robust optimization strategy is proposed. Firstly, a coupled network model is constructed based on complex network theory, taking into account the coupled relationship of energy supply and control dependencies between the power and communication networks. Next, a bi-level mixed-integer robust optimization model is developed to improve power system resilience, incorporating constraints related to the coupling strength, electrical characteristics, and traffic characteristics of the information network. The upper-level model seeks to minimize load shedding by optimizing DC power flow using fuzzy chance constraints, thereby reducing the risk of power imbalances caused by random fluctuations in wind power generation. Furthermore, the deterministic power balance constraints are relaxed into inequality constraints that account for wind power forecasting errors through fuzzy variables. The lower-level model focuses on minimizing traffic load shedding by establishing a topology–function-constrained information network traffic model based on the maximum flow principle in graph theory, thereby improving the efficiency of network flow transmission. Finally, a modified IEEE 39-bus test system with intermittent wind power is used as a case study. Random attack simulations demonstrate that, under the highest link failure rate and wind power penetration, Model 2 outperforms Model 1 by reducing the load loss ratio by 23.6% and improving the node survival ratio by 5.3%. Full article
(This article belongs to the Special Issue Robustness and Resilience of Complex Networks)
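The deterministic core of the upper-level problem, DC power flow with load shedding, takes the standard form below; the paper relaxes the balance constraint with fuzzy chance constraints to absorb wind-forecast error (generic formulation, not the authors' full model):

```latex
\min \sum_{i}\Delta P_{Di}
\quad\text{s.t.}\quad
P_{ij}=\frac{\theta_i-\theta_j}{x_{ij}},\qquad
\sum_{j} P_{ij}=P_{Gi}-\bigl(P_{Di}-\Delta P_{Di}\bigr),\qquad
|P_{ij}|\le P_{ij}^{\max},\qquad 0\le\Delta P_{Di}\le P_{Di}.
```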
30 pages, 4353 KiB  
Review
Is Seeing Believing? A Practitioner’s Perspective on High-Dimensional Statistical Inference in Cancer Genomics Studies
by Kun Fan, Srijana Subedi, Gongshun Yang, Xi Lu, Jie Ren and Cen Wu
Entropy 2024, 26(9), 794; https://doi.org/10.3390/e26090794 - 16 Sep 2024
Viewed by 645
Abstract
Variable selection methods have been extensively developed for and applied to cancer genomics data to identify important omics features associated with complex disease traits, including cancer outcomes. However, the reliability and reproducibility of the findings are in question if valid inferential procedures are not available to quantify the uncertainty of the findings. In this article, we provide a gentle but systematic review of high-dimensional frequentist and Bayesian inferential tools under sparse models which can yield uncertainty quantification measures, including confidence (or Bayesian credible) intervals, p values and false discovery rates (FDR). Connections in high-dimensional inferences between the two realms have been fully exploited under the “unpenalized loss function + penalty term” formulation for regularization methods and the “likelihood function × shrinkage prior” framework for regularized Bayesian analysis. In particular, we advocate for robust Bayesian variable selection in cancer genomics studies due to its ability to accommodate disease heterogeneity in the form of heavy-tailed errors and structured sparsity while providing valid statistical inference. The numerical results show that robust Bayesian analysis incorporating exact sparsity has yielded not only superior estimation and identification results but also valid Bayesian credible intervals under nominal coverage probabilities compared with alternative methods, especially in the presence of heavy-tailed model errors and outliers. Full article
(This article belongs to the Special Issue Bayesian Learning and Its Applications in Genomics)
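The two formulations the review connects can be made concrete with the lasso and its Bayesian counterpart (a standard correspondence, shown here only for illustration):

```latex
\hat\beta \;=\; \arg\min_{\beta}\;
\underbrace{\tfrac12\lVert y-X\beta\rVert_2^2}_{\text{unpenalized loss}}
+\underbrace{\lambda\lVert\beta\rVert_1}_{\text{penalty}},
\qquad
\pi(\beta\mid y)\;\propto\;
\underbrace{\exp\!\Bigl(-\tfrac{1}{2\sigma^2}\lVert y-X\beta\rVert_2^2\Bigr)}_{\text{likelihood}}
\times
\underbrace{\prod_j \tfrac{\lambda_0}{2}\,e^{-\lambda_0|\beta_j|}}_{\text{shrinkage (Laplace) prior}} .
```

The posterior mode under the Laplace prior coincides with the lasso solution at λ = σ²λ₀, the simplest instance of the penalty/shrinkage-prior correspondence the article reviews.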
16 pages, 755 KiB  
Article
An MLWE-Based Cut-and-Choose Oblivious Transfer Protocol
by Yongli Tang, Menghao Guo, Yachao Huo, Zongqu Zhao, Jinxia Yu and Baodong Qin
Entropy 2024, 26(9), 793; https://doi.org/10.3390/e26090793 - 16 Sep 2024
Viewed by 238
Abstract
The existing lattice-based cut-and-choose oblivious transfer protocol is constructed based on the learning-with-errors (LWE) problem, which generally has the problem of inefficiency. An efficient cut-and-choose oblivious transfer protocol is proposed based on the hard module-learning-with-errors (MLWE) problem. Compression and decompression techniques are introduced in the LWE-based dual-mode encryption system to improve it to an MLWE-based dual-mode encryption framework, which is applied to the protocol as an intermediate scheme. Subsequently, the security and efficiency of the protocol are analysed, and the security of the protocol can be reduced to the shortest independent vector problem (SIVP) on the lattice, which is resistant to quantum attacks. Since the whole protocol performs its operations over a polynomial ring, the efficiency of polynomial modular multiplication can be improved by using the fast Fourier transform (FFT). Finally, this paper compares the protocol with an LWE-based protocol in terms of computational and communication complexities. The analysis results show that the protocol reduces the computation and communication overheads by at least a factor of n while maintaining the optimal number of communication rounds under malicious adversary attacks. Full article
(This article belongs to the Special Issue Information-Theoretic Cryptography and Security)
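A minimal sketch of arithmetic in the kind of quotient ring involved, schoolbook multiplication in Z_q[x]/(x^n + 1) (a common MLWE choice, assumed here); a real implementation would use the NTT/FFT speed-up the abstract mentions:

```python
def ring_mul(a, b, q, n):
    """Schoolbook multiplication of two degree-(n-1) polynomials in Z_q[x]/(x^n + 1):
    coefficients that wrap past degree n-1 come back with a minus sign."""
    res = [0] * n
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            k = i + j
            if k < n:
                res[k] = (res[k] + ai * bj) % q
            else:
                res[k - n] = (res[k - n] - ai * bj) % q
    return res

# (1 + 2x + 3x^2 + 4x^3)(5 + 6x + 7x^2 + 8x^3) mod (x^4 + 1, 17)
print(ring_mul([1, 2, 3, 4], [5, 6, 7, 8], q=17, n=4))
```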
14 pages, 634 KiB  
Article
Debiasing the Conversion Rate Prediction Model in the Presence of Delayed Implicit Feedback
by Taojun Hu and Xiao-Hua Zhou
Entropy 2024, 26(9), 792; https://doi.org/10.3390/e26090792 - 15 Sep 2024
Viewed by 431
Abstract
The recommender system (RS) has been widely adopted in many applications, including online advertisements. Predicting the conversion rate (CVR) can help in evaluating the effects of advertisements on users and capturing users’ features, playing an important role in RS. In real-world scenarios, implicit rather than explicit feedback data are more abundant. Thus, directly training the RS with collected data may lead to suboptimal performance due to selection bias inherited from the nature of implicit feedback. Methods such as reweighting have been proposed to tackle selection bias; however, these methods omit delayed feedback, which often occurs due to limited observation times. We propose a novel likelihood approach combining the assumed parametric model for delayed feedback and the reweighting method to address selection bias. Specifically, the proposed methods minimize the likelihood-based loss using the multi-task learning method. The proposed methods are evaluated on the real-world Coat and Yahoo datasets. The proposed methods improve the AUC by 5.7% on Coat and 3.7% on Yahoo compared with the best baseline models. The proposed methods successfully debias the CVR prediction model in the presence of delayed implicit feedback. Full article
(This article belongs to the Special Issue Causal Inference in Recommender Systems)
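The reweighting ingredient can be sketched as an inverse-propensity-weighted loss, a standard correction for selection bias in implicit feedback; the parametric delay model and the multi-task learning the paper combines it with are omitted here:

```python
import numpy as np

def ips_logloss(y_true, p_pred, observed, propensity, eps=1e-12):
    """Inverse-propensity-scored log loss over all user-item pairs: observed pairs
    are up-weighted by 1/propensity so the average approximates the loss one would
    see under unbiased exposure."""
    w = observed / np.clip(propensity, 1e-6, None)
    ll = y_true * np.log(p_pred + eps) + (1 - y_true) * np.log(1 - p_pred + eps)
    return -np.mean(w * ll)

# toy example: 5 user-item pairs, 3 observed, with known exposure propensities
y_true     = np.array([1, 0, 1, 0, 1])
p_pred     = np.array([0.8, 0.3, 0.6, 0.2, 0.5])
observed   = np.array([1, 1, 0, 0, 1])
propensity = np.array([0.9, 0.5, 0.4, 0.3, 0.7])
print(ips_logloss(y_true, p_pred, observed, propensity))
```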
13 pages, 1724 KiB  
Article
Exergy Flow as a Unifying Physical Quantity in Applying Dissipative Lagrangian Fluid Mechanics to Integrated Energy Systems
by Ke Xu, Yan Qi, Changlong Sun, Dengxin Ai, Jiaojiao Wang, Wenxue He, Fan Yang and Hechen Ren
Entropy 2024, 26(9), 791; https://doi.org/10.3390/e26090791 - 14 Sep 2024
Viewed by 284
Abstract
Highly integrated energy systems are on the rise due to increasing global demand. To capture the underlying physics of such interdisciplinary systems, we need a modern framework that unifies all forms of energy. Here, we apply modified Lagrangian mechanics to the description of multi-energy systems. Based on the minimum entropy production principle, we revisit fluid mechanics in the presence of both mechanical and thermal dissipations and propose using exergy flow as the unifying Lagrangian across different forms of energy. We illustrate our theoretical framework by modeling a one-dimensional system with coupled electricity and heat. We map the exergy loss rate in real space and obtain the total exergy changes. Under steady-state conditions, our theory agrees with the traditional formula but incorporates more physical considerations such as viscous dissipation. The integral form of our theory also allows us to go beyond steady-state calculations and visualize the local, time-dependent exergy flow density everywhere in the system. Expandable to a wide range of applications, our theoretical framework provides the basis for developing versatile models in integrated energy systems. Full article
(This article belongs to the Section Multidisciplinary Applications)
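For reference, the flow exergy used as the unifying quantity is conventionally defined relative to a dead state (T0, p0), and exergy destruction is tied to entropy generation through the Gouy–Stodola relation (standard thermodynamics, not the paper's derivation):

```latex
\psi \;=\; (h-h_0)\;-\;T_0\,(s-s_0)\;+\;\tfrac{1}{2}V^2\;+\;gz ,
\qquad
\dot{X}_{\mathrm{destroyed}} \;=\; T_0\,\dot{S}_{\mathrm{gen}} .
```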
21 pages, 2082 KiB  
Review
The Many Roles of Precision in Action
by Jakub Limanowski, Rick A. Adams, James Kilner and Thomas Parr
Entropy 2024, 26(9), 790; https://doi.org/10.3390/e26090790 - 14 Sep 2024
Viewed by 524
Abstract
Active inference describes (Bayes-optimal) behaviour as being motivated by the minimisation of surprise of one’s sensory observations, through the optimisation of a generative model (of the hidden causes of one’s sensory data) in the brain. One of active inference’s key appeals is its conceptualisation of precision as biasing neuronal communication and, thus, inference within generative models. The importance of precision in perceptual inference is evident—many studies have demonstrated the importance of ensuring precision estimates are correct for normal (healthy) sensation and perception. Here, we highlight the many roles precision plays in action, i.e., the key processes that rely on adequate estimates of precision, from decision making and action planning to the initiation and control of muscle movement itself. Thereby, we focus on the recent development of hierarchical, “mixed” models—generative models spanning multiple levels of discrete and continuous inference. These kinds of models open up new perspectives on the unified description of hierarchical computation, and its implementation, in action. Here, we highlight how these models reflect the many roles of precision in action—from planning to execution—and the associated pathologies if precision estimation goes wrong. We also discuss the potential biological implementation of the associated message passing, focusing on the role of neuromodulatory systems in mediating different kinds of precision. Full article
10 pages, 1762 KiB  
Article
Evanescent Electron Wave-Spin
by Ju Gao and Fang Shen
Entropy 2024, 26(9), 789; https://doi.org/10.3390/e26090789 - 14 Sep 2024
Viewed by 213
Abstract
This study demonstrates the existence of an evanescent electron wave outside both finite and infinite quantum wells by solving the Dirac equation and ensuring the continuity of the spinor wavefunction at the boundaries. We show that this evanescent wave shares the spin characteristics of the wave confined within the well, as indicated by analytical expressions for the current density across all regions. Our findings suggest that the electron cannot be confined to a mathematical singularity and that quantum information, or quantum entropy, can leak through any quantum confinement. These results emphasize that the electron wave, fully characterized by Lorentz-invariant charge and current densities, should be considered the true and sole entity of the electron. Full article
(This article belongs to the Section Quantum Information)
17 pages, 388 KiB  
Article
On the Analysis of Wealth Distribution in the Context of Infectious Diseases
by Tingting Zhang, Shaoyong Lai and Minfang Zhao
Entropy 2024, 26(9), 788; https://doi.org/10.3390/e26090788 - 14 Sep 2024
Viewed by 286
Abstract
A mathematical model is established to investigate the economic effects of infectious diseases. The distribution of wealth among two types of agents in the context of an epidemic is discussed. Using the method of statistical mechanics, the evolution of the entropy weak solutions of the model, which involves the wealth density functions of the susceptible and the infectious, is analyzed. We assume that as time tends to infinity, the wealth density function of the infectious is linearly related to the wealth density function of the susceptible individuals. Our results indicate that the spreading of disease significantly affects the wealth distribution. As time tends to infinity, the total wealth density function behaves as an inverse gamma distribution. Numerical experiments are used to examine the distribution of wealth under the epidemic and the resulting wealth inequality among agents. Full article
(This article belongs to the Section Multidisciplinary Applications)
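For reference, the inverse gamma profile mentioned as the long-time wealth distribution has the standard density (shape μ > 0, scale θ > 0):

```latex
f(w) \;=\; \frac{\theta^{\mu}}{\Gamma(\mu)}\, w^{-(1+\mu)}\, e^{-\theta/w},\qquad w>0 ,
```

whose Pareto-type tail proportional to w^{-(1+μ)} is the usual signature of wealth inequality in kinetic exchange models.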
21 pages, 4992 KiB  
Article
Enhancing Security of Telemedicine Data: A Multi-Scroll Chaotic System for ECG Signal Encryption and RF Transmission
by José Ricardo Cárdenas-Valdez, Ramón Ramírez-Villalobos, Catherine Ramirez-Ubieta and Everardo Inzunza-Gonzalez
Entropy 2024, 26(9), 787; https://doi.org/10.3390/e26090787 - 14 Sep 2024
Viewed by 419
Abstract
Protecting sensitive patient data, such as electrocardiogram (ECG) signals, during RF wireless transmission is essential due to the increasing demand for secure telemedicine communications. This paper presents an innovative chaotic-based encryption system designed to enhance the security and integrity of telemedicine data transmission. The proposed system utilizes a multi-scroll chaotic system for ECG signal encryption based on master–slave synchronization. The ECG signal is encrypted by a master system and securely transmitted to a remote location, where it is decrypted by a slave system using an extended state observer. Synchronization between the master and slave is achieved through the Lyapunov criteria, which ensures system stability. The system also supports Orthogonal Frequency Division Multiplexing (OFDM) and adaptive n-quadrature amplitude modulation (n-QAM) schemes to optimize signal discretization. Experimental validations with a custom transceiver scheme confirmed the system’s effectiveness in preventing channel overlap during 2.5 GHz transmissions. Additionally, a commercial RF Power Amplifier (RF-PA) for LTE applications and a development board were integrated to monitor transmission quality. The proposed encryption system ensures robust and efficient RF transmission of ECG data, addressing critical challenges in the wireless communication of sensitive medical information. This approach demonstrates the potential for broader applications in modern telemedicine environments, providing a reliable and efficient solution for the secure transmission of healthcare data. Full article
19 pages, 353 KiB  
Article
Relative Belief Inferences from Decision Theory
by Michael Evans and Gun Ho Jang
Entropy 2024, 26(9), 786; https://doi.org/10.3390/e26090786 - 14 Sep 2024
Viewed by 206
Abstract
Relative belief inferences are shown to arise as Bayes rules or limiting Bayes rules. These inferences are invariant under reparameterizations and possess a number of optimal properties. In particular, relative belief inferences are based on a direct measure of statistical evidence. Full article
(This article belongs to the Special Issue Bayesianism)
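The central object in relative belief inference is the relative belief ratio, which compares posterior to prior belief (standard definition; the paper's contribution is showing how inferences based on it arise as Bayes rules or limiting Bayes rules):

```latex
RB(\theta\mid x) \;=\; \frac{\pi(\theta\mid x)}{\pi(\theta)} ,
```

with values above 1 read as evidence in favour of θ and values below 1 as evidence against.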
13 pages, 415 KiB  
Article
Sampled-Data Exponential Synchronization of Complex Dynamical Networks with Saturating Actuators
by Runan Guo and Wenshun Lv
Entropy 2024, 26(9), 785; https://doi.org/10.3390/e26090785 - 14 Sep 2024
Viewed by 303
Abstract
This paper investigates the problem of exponential synchronization control for complex dynamical networks (CDNs) with input saturation. Considering the effects of transmission delay, a memory sampled-data controller is designed. A modified two-sided looped functional is constructed that takes into account the entire sampling period, including both current state information and delayed state information. This functional only needs to be positive definite at the sampling instants. Sufficient criteria and a controller design method are provided to ensure the exponential synchronization of CDNs with input saturation under the influence of transmission delay, as well as an estimation of the basin of attraction. Additionally, an optimization algorithm for enlarging the region of attraction is proposed. Finally, a numerical example is presented to verify the effectiveness of the conclusions. Full article
(This article belongs to the Section Complexity)
15 pages, 2307 KiB  
Article
Information-Theoretic Modeling of Categorical Spatiotemporal GIS Data
by David Percy and Martin Zwick
Entropy 2024, 26(9), 784; https://doi.org/10.3390/e26090784 - 13 Sep 2024
Viewed by 268
Abstract
An information-theoretic data mining method is employed to analyze categorical spatiotemporal Geographic Information System land use data. Reconstructability Analysis (RA) is a maximum-entropy-based data modeling methodology that works exclusively with discrete data such as those in the National Land Cover Database (NLCD). The NLCD is organized into a spatial (raster) grid and data are available in a consistent format for every five years from 2001 to 2021. An NLCD tool reports how much change occurred for each category of land use; for the study area examined, the most dynamic class is Evergreen Forest (EFO), so the presence or absence of EFO in 2021 was chosen as the dependent variable that our data modeling attempts to predict. RA predicts the outcome with approximately 80% accuracy using a sparse set of cells from a spacetime data cube consisting of neighboring lagged-time cells. When the predicting cells are all Shrubs and Grasses, there is a high probability for a 2021 state of EFO, while when the predicting cells are all EFO, there is a high probability that the 2021 state will not be EFO. These findings are interpreted as detecting forest clear-cut cycles that show up in the data and explain why this class is so dynamic. This study introduces a new approach to analyzing GIS categorical data and expands the range of applications that this entropy-based methodology can successfully model. Full article
(This article belongs to the Section Multidisciplinary Applications)
12 pages, 917 KiB  
Article
Time Sequence Deep Learning Model for Ubiquitous Tabular Data with Unique 3D Tensors Manipulation
by Adaleta Gicic, Dženana Đonko and Abdulhamit Subasi
Entropy 2024, 26(9), 783; https://doi.org/10.3390/e26090783 - 12 Sep 2024
Viewed by 318
Abstract
Although deep learning (DL) algorithms have proven effective in diverse research domains, their application to models for tabular data remains limited. Traditional machine learning models often outperform DL models on tabular data, a difference largely attributed to the size and structure of tabular datasets and the specific application contexts in which they are utilized. Thus, the primary objective of this paper is to propose a method that leverages the strength of Stacked Bidirectional LSTM (Long Short-Term Memory) deep learning algorithms for pattern discovery on tabular data, using customized 3D tensor modeling to feed the neural networks. Our findings are empirically validated using six diverse, publicly available datasets, each varying in size and learning objectives. This paper shows that the proposed model, based on time-sequence DL algorithms generally described as inadequate for tabular data, yields satisfactory results and competes effectively with algorithms specifically designed for tabular data. An additional benefit of this approach is its ability to preserve simplicity while ensuring fast model training, even with large datasets. Even with extremely small datasets, the models achieve exceptional predictive results and fully utilize their capacity. Full article
(This article belongs to the Section Multidisciplinary Applications)
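A minimal Keras sketch of the overall idea: tabular rows are reshaped into a 3D tensor of (samples, pseudo-timesteps, features-per-step) and fed to a stacked bidirectional LSTM. The grouping of columns into pseudo-timesteps below is an arbitrary illustration; the paper's own tensor construction may differ:

```python
import numpy as np
import tensorflow as tf

n_samples, n_cols = 1000, 24
steps, feats = 6, 4                                  # 6 pseudo-timesteps of 4 features each
X = np.random.rand(n_samples, n_cols).astype("float32")
y = (X.sum(axis=1) > n_cols / 2).astype("float32")   # toy binary target
X3d = X.reshape(n_samples, steps, feats)             # tabular rows -> 3D tensor

model = tf.keras.Sequential([
    tf.keras.Input(shape=(steps, feats)),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32, return_sequences=True)),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(16)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X3d, y, epochs=3, batch_size=32, verbose=0)
print(model.evaluate(X3d, y, verbose=0))
```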
17 pages, 3321 KiB  
Article
Sensitivity Analysis of Excited-State Population in Plasma Based on Relative Entropy
by Yosuke Shimada and Hiroshi Akatsuka
Entropy 2024, 26(9), 782; https://doi.org/10.3390/e26090782 - 12 Sep 2024
Viewed by 363
Abstract
A highly versatile evaluation method is proposed for transient plasmas based on statistical physics. It would be beneficial in various industrial sectors, including semiconductors and automobiles. Our research focused on low-energy plasmas in laboratory settings, and they were assessed via our proposed method, which incorporates relative entropy and fractional Brownian motion, based on a revised collisional–radiative model. By introducing an indicator to evaluate how far a system is from its steady state, both the trend of entropy and the radiative process’ contribution to the lifetime of excited states were considered. The high statistical weight of some excited states may act as a bottleneck in the plasma’s energy relaxation throughout the system to a steady state. By deepening our understanding of how energy flows through plasmas, we anticipate potential contributions to resolving global environmental issues and fostering technological innovation in plasma-related industrial fields. Full article
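The distance-from-steady-state indicator is naturally expressed with the relative entropy (Kullback–Leibler divergence) between the instantaneous excited-state population p and its steady-state counterpart p^ss; this is the standard definition, and the paper's indicator may add further ingredients:

```latex
D\bigl(p\,\Vert\,p^{\mathrm{ss}}\bigr) \;=\; \sum_i p_i \ln\frac{p_i}{p_i^{\mathrm{ss}}} \;\ge\; 0 ,
```

vanishing exactly when the population has relaxed to the steady state.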
16 pages, 446 KiB  
Article
Design of Low-Latency Layered Normalized Minimum Sum Low-Density Parity-Check Decoding Based on Entropy Feature for NAND Flash-Memory Channel
by Yingge Li and Haihua Hu
Entropy 2024, 26(9), 781; https://doi.org/10.3390/e26090781 - 12 Sep 2024
Viewed by 265
Abstract
As high-speed big-data communications impose new requirements on storage latency, low-density parity-check (LDPC) codes have become a widely used technology in flash-memory channels. However, the iterative LDPC decoding algorithm faces a high decoding latency problem due to its mechanism based on iterative message transmission. Motivated by the unbalanced bit reliability of the codeword, this paper proposes two technologies, i.e., serial entropy-feature-based layered normalized min-sum (S-EFB-LNMS) decoding and parallel entropy-feature-based layered normalized min-sum (P-EFB-LNMS) decoding. First, we construct an entropy feature vector that reflects the real-time bit reliability of the codeword. Then, the reliability of the output information of the layered processing unit (LPU) is evaluated by analyzing the similarity between the check matrix and the entropy feature vector. Based on this evaluation, we can dynamically allocate and schedule LPUs during the decoding iteration process, thereby optimizing the entire decoding process. Experimental results show that these techniques can significantly reduce decoding latency. Full article
13 pages, 2442 KiB  
Article
Critical Assessment of Information Back-Flow in Measurement-Free Teleportation
by Hannah McAleese and Mauro Paternostro
Entropy 2024, 26(9), 780; https://doi.org/10.3390/e26090780 - 11 Sep 2024
Viewed by 234
Abstract
We assess a scheme for measurement-free quantum teleportation from the perspective of the resources underpinning its performance. In particular, we focus on claims recently made about the crucial role played by the degree of non-Markovianity of the dynamics of the information carrier whose state we aim to teleport. We prove that any link between the efficiency of teleportation and the back-flow of information depends fundamentally on the way the various operations entailed by the measurement-free teleportation protocol are implemented while—in general—no claim of causal link can be made. Our result reinforces the need for the explicit assessment of the underlying physical platform when assessing the performance and resources for a given quantum protocol and the need for a rigorous quantum resource theory of non-Markovianity. Full article
(This article belongs to the Special Issue Simulation of Open Quantum Systems)
29 pages, 710 KiB  
Perspective
Information Thermodynamics: From Physics to Neuroscience
by Jan Karbowski
Entropy 2024, 26(9), 779; https://doi.org/10.3390/e26090779 - 11 Sep 2024
Viewed by 337
Abstract
This paper provides a perspective on applying the concepts of information thermodynamics, developed recently in non-equilibrium statistical physics, to problems in theoretical neuroscience. Historically, information and energy in neuroscience have been treated separately, in contrast to physics approaches, where the relationship of entropy production with heat is a central idea. It is argued here that also in neural systems, information and energy can be considered within the same theoretical framework. Starting from basic ideas of thermodynamics and information theory on a classic Brownian particle, it is shown how noisy neural networks can infer its probabilistic motion. The decoding of the particle motion by neurons is performed with some accuracy, and it has some energy cost, and both can be determined using information thermodynamics. In a similar fashion, we also discuss how neural networks in the brain can learn the particle velocity and maintain that information in the weights of plastic synapses from a physical point of view. Generally, it is shown how the framework of stochastic and information thermodynamics can be used practically to study neural inference, learning, and information storing. Full article
(This article belongs to the Special Issue Entropy and Information in Biological Systems)
11 pages, 407 KiB  
Article
Levy Sooty Tern Optimization Algorithm Builds DNA Storage Coding Sets for Random Access
by Jianxia Zhang
Entropy 2024, 26(9), 778; https://doi.org/10.3390/e26090778 - 11 Sep 2024
Viewed by 388
Abstract
DNA molecules, as a storage medium, possess unique advantages. Not only does DNA storage exhibit significantly higher storage density compared to electromagnetic storage media, but it also features low energy consumption and extremely long storage times. However, the integration of DNA storage into daily life remains distant due to challenges such as low storage density, high latency, and inevitable errors during the storage process. Therefore, this paper proposes constructing a DNA storage coding set based on the Levy Sooty Tern Optimization Algorithm (LSTOA) to achieve an efficient random-access DNA storage system. Firstly, to address the slow iteration speed and susceptibility to local optima of the Sooty Tern Optimization Algorithm (STOA), this paper introduces Levy flight operations and proposes the LSTOA. Secondly, utilizing the LSTOA, this paper constructs a DNA storage encoding set that facilitates random access while meeting combinatorial constraints. To demonstrate the coding performance of the LSTOA, analyses are conducted on 13 benchmark test functions, showcasing its superior performance. Furthermore, under the same combinatorial constraints, the LSTOA constructs larger DNA storage coding sets, effectively reducing the read–write latency and error rate of DNA storage. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
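The Levy-flight ingredient is usually added to a swarm optimizer such as STOA with Mantegna's algorithm for generating heavy-tailed steps; the sketch below shows that standard construction (the DNA-constraint handling and the rest of the LSTOA are not reproduced):

```python
import numpy as np
from math import gamma, sin, pi

def levy_step(beta=1.5, size=1, rng=np.random.default_rng()):
    """Heavy-tailed step via Mantegna's algorithm: step = u / |v|^(1/beta),
    with u ~ N(0, sigma_u^2) and v ~ N(0, 1)."""
    sigma_u = (gamma(1 + beta) * sin(pi * beta / 2)
               / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma_u, size)
    v = rng.normal(0.0, 1.0, size)
    return u / np.abs(v) ** (1 / beta)

# e.g., perturb a candidate solution vector with a small Levy step
x = np.zeros(5)
print(x + 0.01 * levy_step(size=5))
```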
14 pages, 18423 KiB  
Article
The Application of Tsallis Entropy Based Self-Adaptive Algorithm for Multi-Threshold Image Segmentation
by Kailong Zhang, Mingyue He, Lijie Dong and Congjie Ou
Entropy 2024, 26(9), 777; https://doi.org/10.3390/e26090777 - 10 Sep 2024
Viewed by 406
Abstract
Tsallis entropy has been widely used in image thresholding because of its non-extensive properties. The non-extensive parameter q contained in this entropy plays an important role in various adaptive algorithms and has been successfully applied in bi-level image thresholding. In this paper, the relationships between parameter q and pixels’ long-range correlations have been further studied within multi-threshold image segmentation. It is found that the pixels’ correlations are remarkable and stable for images generated by a known physical principle, such as infrared images, medical CT images, and color satellite remote sensing images. The corresponding non-extensive parameter q can be evaluated by using the self-adaptive Tsallis entropy algorithm. The results of this algorithm are compared with those of the Shannon entropy algorithm and the original Tsallis entropy algorithm in terms of quantitative image quality evaluation metrics PSNR (Peak Signal-to-Noise Ratio) and SSIM (Structural Similarity). Furthermore, we observed that for image series with the same background, the q values determined by the adaptive algorithm are consistently kept in a narrow range. Therefore, similar or identical scenes during imaging would produce similar strength of long-range correlations, which provides potential applications for unsupervised image processing. Full article
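For orientation, the bi-level Tsallis thresholding that the multi-threshold, self-adaptive method generalizes can be sketched as follows (standard formulation with a fixed q; the paper's adaptive choice of q is not reproduced):

```python
import numpy as np

def tsallis_threshold(hist, q=0.8):
    """Bi-level Tsallis-entropy thresholding: split the histogram at t, compute the
    Tsallis entropy of each part, combine them with the pseudo-additivity rule,
    and return the t that maximizes the total."""
    p = hist / hist.sum()
    best_t, best_s = 0, -np.inf
    for t in range(1, len(p) - 1):
        pa, pb = p[:t].sum(), p[t:].sum()
        if pa == 0 or pb == 0:
            continue
        sa = (1 - np.sum((p[:t] / pa) ** q)) / (q - 1)
        sb = (1 - np.sum((p[t:] / pb) ** q)) / (q - 1)
        s = sa + sb + (1 - q) * sa * sb        # Tsallis pseudo-additivity
        if s > best_s:
            best_t, best_s = t, s
    return best_t

hist = np.bincount(np.random.randint(0, 256, 10_000), minlength=256)
print(tsallis_threshold(hist))
```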
19 pages, 377 KiB  
Article
Modeling the Arrows of Time with Causal Multibaker Maps
by Aram Ebtekar and Marcus Hutter
Entropy 2024, 26(9), 776; https://doi.org/10.3390/e26090776 - 10 Sep 2024
Viewed by 420
Abstract
Why do we remember the past, and plan the future? We introduce a toy model in which to investigate emergent time asymmetries: the causal multibaker maps. These are reversible discrete-time dynamical systems with configurable causal interactions. Imposing a suitable initial condition or “Past Hypothesis”, and then coarse-graining, yields a Pearlean locally causal structure. While it is more common to speculate that the other arrows of time arise from the thermodynamic arrow, our model instead takes the causal arrow as fundamental. From it, we obtain the thermodynamic and epistemic arrows of time. The epistemic arrow concerns records, which we define to be systems that encode the state of another system at another time, regardless of the latter system’s dynamics. Such records exist of the past, but not of the future. We close with informal discussions of the evolutionary and agential arrows of time, and their relevance to decision theory. Full article
(This article belongs to the Special Issue Time and Temporal Asymmetries)
21 pages, 3253 KiB  
Article
Probing Asymmetric Interactions with Time-Separated Mutual Information: A Case Study Using Golden Shiners
by Katherine Daftari, Michael L. Mayo, Bertrand H. Lemasson, James M. Biedenbach and Kevin R. Pilkiewicz
Entropy 2024, 26(9), 775; https://doi.org/10.3390/e26090775 - 10 Sep 2024
Viewed by 297
Abstract
Leader–follower modalities and other asymmetric interactions that drive the collective motion of organisms are often quantified using information theory metrics like transfer or causation entropy. These metrics are difficult to evaluate accurately without much more data than is typically available from a time series of animal trajectories collected in the field or from experiments. In this paper, we use a generalized leader–follower model to argue that the time-separated mutual information between two organism positions can serve as an alternative metric for capturing asymmetric correlations that is much less data intensive and more accurately estimated by popular k-nearest neighbor algorithms than transfer entropy. Our model predicts a local maximum of this mutual information at a time separation value corresponding to the fundamental reaction timescale of the follower organism. We confirm this prediction by analyzing time series trajectories recorded for a pair of golden shiner fish circling an annular tank. Full article
(This article belongs to the Section Information Theory, Probability and Statistics)
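A small sketch of the proposed metric on synthetic data: the time-separated mutual information I(leader_t; follower_{t+τ}) is estimated with scikit-learn's k-nearest-neighbour estimator, and its peak over τ recovers the follower's reaction delay (toy data, not the golden shiner trajectories):

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression

def lagged_mi(leader, follower, max_lag=50):
    """I(leader_t ; follower_{t+tau}) for tau = 1..max_lag, using a kNN estimator."""
    mi = []
    for tau in range(1, max_lag + 1):
        x = leader[:-tau].reshape(-1, 1)
        y = follower[tau:]
        mi.append(mutual_info_regression(x, y, n_neighbors=4, random_state=0)[0])
    return np.array(mi)

# toy leader/follower pair: the follower tracks the leader 10 steps later
rng = np.random.default_rng(1)
leader = np.cumsum(rng.normal(size=2000))
follower = np.roll(leader, 10) + 0.5 * rng.normal(size=2000)
print("estimated reaction delay:", np.argmax(lagged_mi(leader, follower)) + 1)   # ~10
```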
15 pages, 44599 KiB  
Article
A Statistical Analysis of Fluid Interface Fluctuations: Exploring the Role of Viscosity Ratio
by Selwin Heijkoop, David Rieder, Marcel Moura, Maja Rücker and Catherine Spurin
Entropy 2024, 26(9), 774; https://doi.org/10.3390/e26090774 - 10 Sep 2024
Viewed by 321
Abstract
Understanding multiphase flow through porous media is integral to geologic carbon storage or hydrogen storage. The current modelling framework assumes each fluid present in the subsurface flows in its own continuously connected pathway. The restriction in flow caused by the presence of another fluid is modelled using relative permeability functions. However, dynamic fluid interfaces have been observed in experimental data, and these are not accounted for in relative permeability functions. In this work, we explore the occurrence of fluid fluctuations in the context of sizes, locations, and frequencies by altering the viscosity ratio for two-phase flow. We see that the fluctuations alter the connectivity of the fluid phases, which, in turn, influences the relative permeability of the fluid phases present. Full article
(This article belongs to the Special Issue Statistical Mechanics of Porous Media Flow)