«Automata Based Multivariate Time Series Analysis for Anomaly Detection over Sliding Time Windows» - work co-authored by Claude-Guy Quimper, published in Engineering Proceedings. Read it here: https://ow.ly/SbnZ50RPk2F
Post by the Institut intelligence et données (IID), Université Laval
-
Here you go! At this link you can find the paper “Iterative hierarchical clustering algorithm for automated operational modal analysis”, recently published in Automation in Construction. The paper formulates a new, fast, and robust iterative hierarchical clustering algorithm for automated dynamic identification and validates it through global sensitivity analyses. Free to share and free to download!
Iterative hierarchical clustering algorithm for automated operational modal analysis
sciencedirect.com
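To give a feel for the underlying idea, here is a minimal, generic sketch of grouping modal frequency estimates in Python. It is a simple threshold-based single-linkage stand-in, not the paper's iterative algorithm; the function name and the `tol` threshold are illustrative assumptions.

```python
def cluster_frequencies(freqs, tol=0.01):
    """Greedy single-linkage grouping of modal frequency estimates:
    sorted values are split wherever the relative gap exceeds tol.
    Returns the mean frequency of each cluster.
    (A generic stand-in, not the paper's iterative algorithm.)"""
    f = sorted(freqs)
    clusters, current = [], [f[0]]
    for a, b in zip(f, f[1:]):
        if (b - a) / a <= tol:
            current.append(b)   # relative gap small: same mode
        else:
            clusters.append(current)
            current = [b]       # gap too large: start a new cluster
    clusters.append(current)
    return [sum(c) / len(c) for c in clusters]
```

For example, estimates near 1.0 Hz, 2.5 Hz, and 5.0 Hz collapse into three cluster means.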
-
Spectral Processing for Time Stamped Signals: Theory, Implementation, and Application Examples: Crystal Instruments (CI) recently released an accurate time-stamping technology that uses GPS as a time source. The technology is deployed on the CoCo-80X portable analyzer and on GRS, CI’s latest product for acoustic applications. The time marks can be accurate to 100 nanoseconds. Using this time-stamping technology, we developed the theory and algorithms to compute auto- and cross-spectra between measurement channels on separate data acquisition units located thousands of miles apart. https://lnkd.in/gasRYEMh
Spectral Processing for GPS Time Stamped Signals
crystalinstruments.com
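As a rough illustration of the signal processing involved, here is a minimal single-block cross-spectrum estimate in Python, assuming two channels already aligned on a common (e.g. GPS-derived) time base. Windowing and scaling conventions are simplified; this is a sketch, not CI's implementation.

```python
import numpy as np

def cross_spectrum(x, y, fs):
    """Single-block cross spectrum Gxy(f) = conj(X(f)) * Y(f) / N between
    two channels sharing a common time base, sampled at fs Hz."""
    n = len(x)
    w = np.hanning(n)                       # window to reduce leakage
    X = np.fft.rfft(x * w)
    Y = np.fft.rfft(y * w)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)  # one-sided frequency axis
    return freqs, np.conj(X) * Y / n
```

When the two channels are identical, the cross spectrum reduces to the (real, non-negative) auto spectrum, and its peak sits at the signal frequency; the phase of Gxy between distinct channels encodes their relative delay.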
-
Allan variance is a valuable statistical tool for measuring the stability of a system. Learn more about what it is, how it's used, and how the Moku Phasemeter can help.
Understanding and performing Allan variance measurements
https://www.liquidinstruments.com
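As a quick illustration of the definition, here is a minimal sketch of the non-overlapping Allan variance in Python, assuming the input is a series of fractional-frequency samples (the Moku Phasemeter's own processing is of course more sophisticated).

```python
import numpy as np

def allan_variance(y, m):
    """Non-overlapping Allan variance of fractional-frequency samples y
    at averaging factor m (tau = m * t0 for sample period t0):
    AVAR(tau) = 0.5 * mean((ybar_{i+1} - ybar_i)^2)."""
    k = len(y) // m
    # average y within consecutive blocks of m samples
    block_means = np.asarray(y[:k * m]).reshape(k, m).mean(axis=1)
    return 0.5 * np.mean(np.diff(block_means) ** 2)
```

A perfectly stable source gives zero Allan variance at every averaging time, while fast alternation averages away as the blocks grow.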
-
Two papers accepted at international conferences.
1. Algorithms for Galois Words: Detection, Factorization, and Rotation, CPM 2024. arXiv: https://lnkd.in/eR84DM6Q
2. Breaking a Barrier in Constructing Compact Indexes for Parameterized Pattern Matching, ICALP 2024. arXiv: https://lnkd.in/eMjukvQy
Algorithms for Galois Words: Detection, Factorization, and Rotation
arxiv.org
-
When fine-tuning LLMs, the memory required by optimizers is an active area of research. Researchers at Fudan University proposed LOMO (LOw-Memory Optimization), a modification of stochastic gradient descent that stores less data than other optimizers during fine-tuning.

The authors fine-tuned separate instances of LLaMA-7B using LOMO and two popular optimizers, SGD and AdamW (a modified version of Adam). These required 14.6GB, 52.0GB, and 102.2GB of memory respectively. All three required the same amount of memory to store model parameters (12.55GB) and activations (1.79GB). For gradients, however, LOMO required only 0.24GB while SGD and AdamW each required 12.55GB. The biggest difference was optimizer-state memory: LOMO required 0GB, SGD required 25.1GB, and AdamW required 75.31GB. The authors also compared LOMO to LoRA, which works with an optimizer (in this case, AdamW) to learn to change each layer’s weight matrix by the product of two smaller matrices.

Why it matters: methods like LoRA save memory by fine-tuning a small number of parameters relative to a network’s total parameter count. But because only a small number of parameters are adjusted, the performance gain from fine-tuning is smaller than it could be. LOMO fine-tunes all parameters, maximizing the performance gain while reducing memory requirements.

You may check further details at: https://lnkd.in/gghFNVv6 #lora #lomo #machinelearning #statsomlai
GenAI
sites.google.com
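The reported totals can be checked with a few lines of arithmetic; the per-component figures below are taken directly from the post.

```python
# Memory accounting (GB) for fine-tuning LLaMA-7B, per the reported figures.
components = {
    "LOMO":  {"params": 12.55, "activations": 1.79, "grads": 0.24,  "opt_state": 0.0},
    "SGD":   {"params": 12.55, "activations": 1.79, "grads": 12.55, "opt_state": 25.1},
    "AdamW": {"params": 12.55, "activations": 1.79, "grads": 12.55, "opt_state": 75.31},
}
# Each total is the sum of the four components, rounded to one decimal.
totals = {name: round(sum(parts.values()), 1) for name, parts in components.items()}
print(totals)  # {'LOMO': 14.6, 'SGD': 52.0, 'AdamW': 102.2}
```

The components add up exactly to the reported 14.6GB, 52.0GB, and 102.2GB; the optimizer state (momentum and variance buffers for AdamW, momentum for SGD, nothing for LOMO) dominates the gap.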
-
Enabling digital services for student-loan-related activities while maintaining the highest security standards, the most compliant personal data protection, and customer-centric, data-driven innovation.
Exciting new research in communication and computation design! Check out this blog post on semantic communication with a probability graph: the base station (BS) and the user represent their shared knowledge as a probability graph and use it to jointly compress the transmitted information, reducing data size. Find out more about this innovative approach, including the allocation of communication and computation resources, in the full article at https://bit.ly/48G03Zh. Don't miss the simulation results showcasing the effectiveness of this system. #communication #computation #innovation
-
📃 Scientific paper: Weighted Context-Free-Language Ordered Binary Decision Diagrams

Abstract: Over the years, many variants of Binary Decision Diagrams (BDDs) have been developed to address the deficiencies of vanilla BDDs. A recent innovation is the Context-Free-Language Ordered BDD (CFLOBDD), a hierarchically structured decision diagram, akin to BDDs enhanced with a procedure-call mechanism, which allows substructures to be shared in ways not possible with BDDs. For some functions, CFLOBDDs are exponentially more succinct than BDDs. Unfortunately, the multi-terminal extension of CFLOBDDs, like multi-terminal BDDs, cannot efficiently represent functions of type B^n -> D when the function's range has many different values. This paper addresses this limitation through a new data structure called Weighted CFLOBDDs (WCFLOBDDs). WCFLOBDDs extend CFLOBDDs using insights from the design of Weighted BDDs (WBDDs) -- BDD-like structures with weights on edges. We show that WCFLOBDDs can be exponentially more succinct than both WBDDs and CFLOBDDs. We also evaluate WCFLOBDDs for quantum-circuit simulation, and find that they perform better than WBDDs and CFLOBDDs on most benchmarks. With a 15-minute timeout, the number of qubits that can be handled by WCFLOBDDs is 1,048,576 for GHZ (1x over CFLOBDDs, 256x over WBDDs); 262,144 for BV and DJ (2x over CFLOBDDs, 64x over WBDDs); and 2,048 for QFT (128x over CFLOBDDs, 2x over WBDDs). (21 pages)

Discover the rest of the scientific article on es/iode ➡️ https://etcse.fr/k3iT
Weighted Context-Free-Language Ordered Binary Decision Diagrams
-
University Professor of Ethology and Primatology. Member of the Institut Universitaire de France. Institut Pluridisciplinaire Hubert Curien, Université de Strasbourg-CNRS.
New recommendation in PCI Network Science (Peer Community In): “A theoretical and empirical evaluation of modularities for hypergraphs / Comparison of modularity-based approaches for nodes clustering in hypergraphs” https://lnkd.in/ewSeCqSR
A theoretical and empirical evaluation of modularities for hypergraphs
networksci.peercommunityin.org
-
Aeronautical engineer and PhD in applied mathematics | optimization | automatic control | control theory.
Abstract: We consider deterministic nonlinear discrete-time systems whose inputs are generated by policy iteration (PI) for undiscounted cost functions. We first assume that PI is recursively feasible, in the sense that the optimization problems solved at each iteration admit a solution. In this case, we provide novel conditions to establish recursive robust stability properties for a general attractor, meaning that the policies generated at each iteration ensure a robust $\mathcal{KL}$-stability property with respect to a general state measure. We then derive novel explicit bounds on the mismatch between the (suboptimal) value function returned by PI at each iteration and the optimal one. However, we show by a counter-example that PI may fail to be recursively feasible, in which case the mentioned stability and near-optimality guarantees no longer apply. We therefore also present a modification of PI so that recursive feasibility is guaranteed a priori under mild conditions. This modified algorithm, called $\mathrm{PI}^{+}$, is shown to preserve recursive robust stability when the attractor is compact. Additionally, $\mathrm{PI}^{+}$ enjoys the same near-optimality properties as its PI counterpart under the same assumptions.
Robust Stability and Near-optimality for Policy Iteration: For Want of Recursive Feasibility, All is not Lost
ieeexplore.ieee.org
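For context, here is a textbook policy iteration loop on a tiny deterministic finite MDP. It is discounted for simplicity and uses made-up dynamics and costs, so it is only a schematic analogue of the paper's undiscounted, nonlinear setting; in this finite case every evaluation problem trivially admits a solution, so recursive feasibility is not an issue here.

```python
import numpy as np

# Textbook policy iteration on a 3-state, 2-action deterministic MDP.
gamma = 0.9
next_state = np.array([[1, 2],   # next_state[s, a]: deterministic dynamics
                       [0, 2],
                       [2, 2]])
cost = np.array([[1.0, 4.0],     # cost[s, a]
                 [2.0, 1.0],
                 [0.0, 0.0]])    # state 2 is a cost-free attractor
states = np.arange(3)

policy = np.zeros(3, dtype=int)
for _ in range(10):
    # Policy evaluation (iterative): V = c_pi + gamma * V(f_pi)
    V = np.zeros(3)
    for _ in range(500):
        V = cost[states, policy] + gamma * V[next_state[states, policy]]
    # Policy improvement: greedy one-step lookahead
    policy = np.argmin(cost + gamma * V[next_state], axis=1)
```

Each iteration's policy drives the system toward the attractor (state 2), and the value functions decrease monotonically toward the optimum, mirroring the stability and near-optimality properties the paper studies in far greater generality.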
-
🔍 We encourage you to explore the recently published paper "Factored Systolic Arrays Based on Radix-8 Multiplication for Machine Learning Acceleration". 📝 Authored by: Kashif Inayat, Inayat Ullah, and Jaeyong Chung. Volume 32, Issue 7, July 2024. Systolic arrays (SAs) are regaining attention as the core engines for accelerating machine learning workloads. This article shows that a large design space exists at the logic level despite the simple structure of SAs, and proposes two novel SAs based on factoring and radix-8 multipliers: the first, the factored SA (FSA), extracts the Booth encoding and the hard-multiple generation common across all processing elements (PEs), reducing the delay and area of the whole SA. 🔗 Learn more on IEEE Xplore: https://loom.ly/-UKOeJw #PaperHighlight #TVLSI #VeryLargeScaleIntegrationSystems #VeryLargeScaleIntegration
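To illustrate the radix-8 Booth recoding that the FSA factors out of its processing elements, here is a small Python sketch of the digit-set identity only; it is not the paper's hardware design, and the function name and group handling are illustrative.

```python
def booth_radix8_digits(b, ngroups):
    """Radix-8 Booth recoding: returns digits d_i in {-4,...,4} with
    b = sum(d_i * 8**i), for b >= 0 with enough groups to cover its bits.
    Multiples 1X/2X/4X of the multiplicand are plain shifts; only the
    'hard multiple' 3X needs a real addition, which a factored design
    can generate once and share across all processing elements."""
    digits = []
    for i in range(ngroups):
        lo = 3 * i - 1  # each digit looks at 4 overlapping bits b[lo..lo+3]
        w = [(b >> (lo + j)) & 1 if lo + j >= 0 else 0 for j in range(4)]
        # digit = b[3i-1] + b[3i] + 2*b[3i+1] - 4*b[3i+2]
        digits.append(w[0] + w[1] + 2 * w[2] - 4 * w[3])
    return digits
```

Summing `d_i * 8**i` reconstructs the original multiplier, which is exactly what lets a multiplier replace eight partial products per three bits with one signed digit drawn from shift-friendly multiples plus 3X.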