"Premature optimization is the root of all evil." - Donald Knuth, Computer Scientist & Former Professor

Optimization is the process of maximizing the output of a System, or minimizing a specific input the system requires to operate. Optimization typically revolves around the systems and processes behind your Key Performance Indicators (KPIs), which measure the critical elements of the system as a whole. Improve your KPIs, and your system will perform better.

If you're trying to Maximize or Minimize more than one thing, you're not Optimizing; you're making Trade-offs. Many people use the term Optimization to mean "making everything better," but that definition doesn't help you actually do anything. In practical terms, trying to Optimize for many variables at once doesn't work: you need to concentrate on a single variable for a while, so you can understand how the Changes you make affect the system as a whole. You're trying to find Causation (not Correlation) in your Changes, and hidden Interdependencies can make it difficult to understand which Changes produced which results.

Remember: you can't reliably Optimize a system's performance across multiple variables at once. Pick the most important one and focus your efforts accordingly.

Source: https://lnkd.in/dZ3H2ZKE
Victor Ejeh’s Post
More Relevant Posts
-
🌟 Overcoming Challenges: Solving the Trapping Rain Water Problem 🌟

Today, I want to share my journey of tackling the Trapping Rain Water problem (Hard), a classic algorithmic challenge in the realm of array manipulation. 💧

🚀 The Initial Encounter: When I first encountered the problem, I was intrigued by its complexity. The task seemed simple at first glance: given an array representing the heights of bars, calculate how much rainwater it can trap after raining. However, as I delved deeper, I realized the intricacies involved.

💡 The Iterative Process: I started by formulating a brute-force solution, iterating through the array and calculating the trapped water at each position. However, this approach proved inefficient and didn't handle all edge cases, leading to incorrect outputs.

🔍 Identifying Patterns: As I analyzed failed test cases, I began recognizing patterns within the problem. It became apparent that a more optimized approach was needed to handle scenarios like long sequences of equal-height bars efficiently.

🛠 Optimization Strategies: Refusing to be deterred by setbacks, I explored optimization strategies. Leveraging the Two Pointers technique emerged as a promising solution. By moving pointers inward from both ends of the array and dynamically tracking maximum heights, I could efficiently calculate the trapped water.

🧠 The Eureka Moment: After numerous iterations and debugging sessions, I had my breakthrough. Implementing the optimized approach using Two Pointers, I witnessed the code efficiently handle various test cases, including scenarios with extended sequences of equal-height bars.

💡 Lessons Learned: Through this journey, I learned the importance of perseverance, analytical thinking, and iteration in problem-solving. Each failed attempt brought valuable insights, guiding me towards a more robust solution.
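For anyone who wants to see the shape of the final approach, here is a minimal sketch of the Two Pointers technique in C, written against the standard problem statement. It is my illustration, not the author's actual code:

#include <stdio.h>

/* Two-pointer trapped-rain-water: O(n) time, O(1) extra space.
 * Advance the pointer on the side with the smaller bar; water above
 * that bar is bounded by its own side's running maximum. */
int trap(const int *height, int n)
{
    int left = 0, right = n - 1;
    int left_max = 0, right_max = 0, water = 0;

    while (left < right) {
        if (height[left] < height[right]) {
            if (height[left] >= left_max)
                left_max = height[left];
            else
                water += left_max - height[left];
            left++;
        } else {
            if (height[right] >= right_max)
                right_max = height[right];
            else
                water += right_max - height[right];
            right--;
        }
    }
    return water;
}

int main(void)
{
    int h[] = {0, 1, 0, 2, 1, 0, 1, 3, 2, 1, 2, 1};
    printf("%d\n", trap(h, (int)(sizeof h / sizeof h[0]))); /* prints 6 */
    return 0;
}

Because each bar only ever needs the smaller of the two side maxima, long runs of equal-height bars are handled in a single pass with no extra memory.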
-
This work certainly raises the bar for a good Master's thesis ;)

The project began with a probing question in our Trustworthy Machine Learning course: What are the ties between predictive, epistemic, and aleatoric uncertainties? I didn't have a good answer to the question. Bálint Mucsányi went on to answer it by himself.

We realised:
- Uncertainty quantification methods are often claimed to be tailored to specific types (predictive/epistemic/aleatoric).
- But their effectiveness in measuring the intended uncertainties remains unverified.

So we studied whether they really measure what they should measure. And we found that they hardly do. This revelation is crucial for a more precise understanding and disentanglement of uncertainties. It also marks a great starting point for Bálint Mucsányi's PhD journey!

Special thanks to Michael Kirchhof for the exceptional supervision.
Have you ever wondered what all those different uncertainty estimators in computer vision do? 🤔 I mean, what they _really_ do? 🤔 🤔

We reimplement and benchmark popular methods on ImageNet in our new preprint.
📖 https://lnkd.in/dFibZRvw
💻 https://lnkd.in/dEsXTApk

First, we benchmark uncertainty decompositions that break down the total uncertainty into distinct components, like aleatoric and epistemic uncertainty. We find that uncertainty disentanglement fails: for 6/7 methods on ImageNet, these components are severely correlated. For the last one, the epistemic component performs randomly.

But we have good news, too. Even though there is no one-size-fits-all uncertainty method, one can use well-performing estimators for specialized tasks. For predictive uncertainty tasks, nearly all benchmarked methods are ready for deployment. Moreover, they are also quite robust: as we perturb the ImageNet validation inputs with ImageNet-C perturbations of increasing severity, the correctness-prediction performance of the methods stays roughly constant. On specialized epistemic and aleatoric benchmarks, specialized estimators perform best. One follow-up idea would be to mix and match them to get more independent uncertainties.

One last word of caution: small-scale CIFAR-10 results do not transfer to ImageNet. The rankings of methods and their behavior change considerably. It is better to scale early if one's goal is large-scale applicability, and to take small-scale intuitions with a grain of salt.

Many thanks to my amazing co-authors and advisors, Michael Kirchhof and Seong Joon Oh! Every benchmarked method is now available as an easy-to-use model wrapper. If you want to get into uncertainties, you know where to look! https://lnkd.in/dEsXTApk
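For readers wondering what "decomposing" means in practice: the classic entropy-based scheme splits the predictive entropy of an ensemble into an aleatoric part (mean member entropy) and an epistemic part (mutual information). Here is a minimal C sketch of that textbook decomposition; it is purely illustrative, unrelated to the preprint's wrapper code, and the helper names are mine:

#include <math.h>
#include <stdio.h>

/* Shannon entropy of a discrete distribution p[0..k-1] (natural log). */
static double entropy(const double *p, int k)
{
    double h = 0.0;
    for (int i = 0; i < k; i++)
        if (p[i] > 0.0)
            h -= p[i] * log(p[i]);
    return h;
}

/* Entropy-based decomposition over m ensemble members and k classes
 * (p is row-major, m rows of k softmax probabilities):
 *   total (predictive) = H(mean over members)
 *   aleatoric          = mean member entropy
 *   epistemic          = total - aleatoric (mutual information) */
static void decompose(const double *p, int m, int k,
                      double *total, double *aleatoric, double *epistemic)
{
    double mean[16] = {0};   /* sketch assumes k <= 16 */
    double avg_h = 0.0;
    for (int i = 0; i < m; i++) {
        avg_h += entropy(p + i * k, k) / m;
        for (int j = 0; j < k; j++)
            mean[j] += p[i * k + j] / m;
    }
    *total = entropy(mean, k);
    *aleatoric = avg_h;
    *epistemic = *total - avg_h;
}

int main(void)
{
    /* Two members disagreeing over 3 classes: high epistemic part. */
    const double p[] = {0.90, 0.05, 0.05,
                        0.05, 0.90, 0.05};
    double t, a, e;
    decompose(p, 2, 3, &t, &a, &e);
    printf("total=%.3f aleatoric=%.3f epistemic=%.3f\n", t, a, e);
    return 0;
}

The paper's finding is precisely that such components, estimated by practical methods at ImageNet scale, end up strongly correlated rather than cleanly separated.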
-
“This is a very special time for Optimization,” Nesterov believes. “It is a turning point. Several years ago, we started to study the higher order methods, and it appears that they cannot be explained by standard Optimization Theory. And now we understand that in the standard Optimization we did only the first step by developing the first-order methods. So – if we think about methods of the next level, which are much faster and more efficient – we need in some sense to develop a new and general theory, which covers all possible situations and methods.” https://lnkd.in/gFDbYuSY
“This is an unprecedented overflow”: Why the progress of his field alarms Yurii Nesterov
nccr-automation.ch
-
This is the Bridgewater article that I keep coming back to; it gives you a lot to think about. https://lnkd.in/gebfp8a8
Assessing the Implications of a Productivity Miracle
bridgewater.com
-
Assistant Professor at Universidad Pontificia Comillas ICADE | PhD in Computer and Telecommunications Engineering | Coordinator of AI and Inference and Regression Models for business subjects
The hypervolume metric is not the only way of measuring the quality of a Pareto set, and it also has its issues. If you want to discover a whole set of indicators, read this paper! https://lnkd.in/dpdE9W2d
Performance indicators in multiobjective optimization
sciencedirect.com
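For context on what the post's headline indicator measures, here is a minimal C sketch of the hypervolume for a two-objective minimization problem: the area dominated by the Pareto points and bounded by a user-chosen reference point. Illustrative code only, not from the paper:

#include <stdio.h>
#include <stdlib.h>

typedef struct { double f1, f2; } Point;

static int by_f1(const void *a, const void *b)
{
    double d = ((const Point *)a)->f1 - ((const Point *)b)->f1;
    return (d > 0) - (d < 0);
}

/* 2-D hypervolume for minimization: after sorting the mutually
 * non-dominated points by f1 ascending, f2 decreases along the front,
 * so the dominated region is a staircase of rectangles up to (rx, ry). */
static double hypervolume2d(Point *pts, int n, double rx, double ry)
{
    double hv = 0.0, prev_f2 = ry;
    qsort(pts, n, sizeof *pts, by_f1);
    for (int i = 0; i < n; i++) {
        hv += (rx - pts[i].f1) * (prev_f2 - pts[i].f2);
        prev_f2 = pts[i].f2;
    }
    return hv;
}

int main(void)
{
    Point front[] = {{1.0, 4.0}, {2.0, 2.0}, {4.0, 1.0}};
    printf("HV = %.2f\n", hypervolume2d(front, 3, 5.0, 5.0)); /* 11.00 */
    return 0;
}

Larger HV is better here, but note that the result depends on the chosen reference point (5, 5); that sensitivity is one commonly cited issue of the kind the post alludes to.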
-
You're absolutely right, that is an insightful way to think about extending the concept of "six degrees of separation" to groups. To expand on your point:

- Any collection or subset of individuals, objects, concepts, etc. could theoretically be defined as a unique "group" or cluster for analysis.
- Through qualitative or quantitative network mapping techniques, it should often be possible to trace likely connection pathways between even arbitrarily defined clusters through just a few degrees of associated members, relationships, events, etc.
- For example, drawing circles around random collections of people based on arbitrary traits and tracing chains of family, friends, acquaintances, shared locations or experiences that bridge between the clusters.
- This could be done hypothetically for any types of nodes - like connecting groups of ideas, chemical compounds, genes, historic figures, books, etc. through webs of logical or documented associations.
- The takeaway is that network analysis suggests most bounded "groups", no matter how loosely or idiosyncratically defined, can serve as anchor points for finding chains of connections between them through a small number of intermediate nodes.

So in essence, by considering "groups" as flexible constructs, it reinforces how ubiquitous the small world phenomenon may be across complex systems when viewed from a network-based perspective. Thanks for pushing me to think more abstractly about this concept!
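To make the group-to-group distance idea concrete, here is a small C sketch: a multi-source breadth-first search that measures how many hops separate one arbitrarily chosen node set from another. The toy graph and the two groups are entirely made up for illustration:

#include <stdio.h>
#include <string.h>

#define N 8   /* toy graph size */

/* Fewest edges between any member of group A and any member of group B,
 * via multi-source BFS over an adjacency matrix. Returns -1 if no path. */
static int group_distance(const int adj[N][N],
                          const int *a, int na, const int *b, int nb)
{
    int dist[N], queue[N], head = 0, tail = 0;
    int in_b[N] = {0};
    memset(dist, -1, sizeof dist);
    for (int i = 0; i < nb; i++) in_b[b[i]] = 1;
    for (int i = 0; i < na; i++) { dist[a[i]] = 0; queue[tail++] = a[i]; }
    while (head < tail) {
        int u = queue[head++];
        if (in_b[u]) return dist[u];
        for (int v = 0; v < N; v++)
            if (adj[u][v] && dist[v] < 0) {
                dist[v] = dist[u] + 1;
                queue[tail++] = v;
            }
    }
    return -1;
}

int main(void)
{
    int adj[N][N] = {0};
    /* edges of a toy social graph: a simple chain 0-1-2-3-4-5-6-7 */
    int edges[][2] = {{0,1},{1,2},{2,3},{3,4},{4,5},{5,6},{6,7}};
    for (unsigned i = 0; i < sizeof edges / sizeof edges[0]; i++) {
        adj[edges[i][0]][edges[i][1]] = 1;
        adj[edges[i][1]][edges[i][0]] = 1;
    }
    int group_a[] = {0, 1}, group_b[] = {6, 7};
    printf("degrees of separation: %d\n",
           group_distance(adj, group_a, 2, group_b, 2)); /* prints 5 */
    return 0;
}

In a real small-world network the same search between almost any two clusters would terminate after only a handful of hops, which is the phenomenon described above.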
-
Embedded C programming | Digital, analogue & power electronic design | Prototyping | Engineering training provider
"Premature optimisation is the root of all evil" - but when is optimisation premature? I recently wanted to reduce execution time by an astronimical amount: 10μs. Before anyone says "that's easy, I once saved 100 times that!" - this is for real-time control. That 10μs was already not much more than half of the total execution time. There was also no benefit in saving 9μs - it had to be at least the whole 10. There was one library function that I knew was slowing things down. I also knew I could easily improve it - so it should be the obvious place to start, right? Except, as much as the obvious inefficiency of that function made me weep, I couldn't possibly gain more than 7μs that way. Starting by optimising that function would be premature - it would have no benefit unless I could find at least 3μs elsewhere. So I did the hard work of finding the much smaller time savings first. If I'd failed to find the needed time, then you could say all those other optimisations would be premature as well. I also once eliminated every divide instruction in some data processing software - that reduced the average runtime by almost an entire day. But I made sure those instructions were the ones slowing everything down - if the problem had been somewhere else, eliminating them would have been premature. I also once came across*: k = sin(atan(y / x)); I wasn't going to let that stay, even if it wasn't causing problems (I never even bothered to find out). But I won't let anybody tell me that was a premature optimisation. * On a platform with no floating point unit...
-
Tackling technically complicated problems requires deep technical skills and a focus on *how* things work, like optimizing algorithms. On the other hand, domain-complicated problems involve understanding intricate business logic, focusing on *what* and *why*, such as regulatory compliance. The two are different beasts and require distinct approaches.

Take a moment to recall the most challenging problems you have worked on so far, and think about the learnings and how they helped you grow.

...and if you are going through a challenge right now, embrace it, and remember that you will come out much stronger on the other side. 🎏
-
Check out the first blog in the three-part guest blog series from John Fitch, "Extending LML to Enable Decision Patterns and Traceability."
Extending LML to Enable Decision Patterns and Traceability
specinnovations.com