Jose Crespo’s Post

Jose Crespo

Mathematician lurking in the Tech Underworld

AI/ML is overkill, and frankly a foolish choice, when the problem is already well understood mathematically, as with the dynamic routing and the connections between drivers and customers at #Uber. The Uber folks run a very expensive AI/ML platform over enormous volumes of data in a distributed database. Mathematically, though, the problem reduces to a multiple-domination problem on graphs. Yes, you can argue that domination is NP-hard and that you therefore need the brute force of zillions of AI mules. However, modern graph theory, which combines set theory with combinatorics and probability theory, lets you replace the intractable exact problem with computable upper and lower bounds on the domination number. So, in the end, a simple, cheap laptop is enough: given the multi-domination theorems, a very simple algorithm with quadratic time complexity yields a polynomially solvable formulation instead of Uber's NP brute force.
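To make the graph-theoretic framing concrete, here is a minimal sketch of the classic greedy approximation for a (single) dominating set. The graph below is invented purely for illustration, and this is not Uber's algorithm; note also that greedy gives a logarithmic approximation guarantee on general graphs, not an exact polynomial-time solution, which is consistent with the bounds-based approach described above.

```python
def greedy_dominating_set(adj):
    """adj: dict mapping node -> set of neighbours. Returns a dominating set."""
    uncovered = set(adj)        # nodes not yet dominated
    dominating = set()
    while uncovered:
        # pick the node whose closed neighbourhood covers the most
        # still-uncovered nodes
        best = max(adj, key=lambda v: len(({v} | adj[v]) & uncovered))
        dominating.add(best)
        uncovered -= {best} | adj[best]
    return dominating

# toy graph: a star on a,b,c,d plus a pendant edge d-e
graph = {
    "a": {"b", "c", "d"},
    "b": {"a"},
    "c": {"a"},
    "d": {"a", "e"},
    "e": {"d"},
}
print(greedy_dominating_set(graph))   # {'a', 'd'} dominates every node
```

Each loop iteration scans all nodes, so the whole procedure runs in roughly quadratic time on sparse graphs, in the spirit of the "simple algorithm with quadratic O complexity" claim.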

SOUMEN S.

Author, Technical Leader & Manager @ Tech Companies | Software Development Methodologies

You Cannot Be At Two Places At Once. You say: Duh! It is obvious. Please understand this: ChatGPT is not rule-based; no rule, however obvious to us, is known to ChatGPT. A case in point is a recent experiment by Pranab Ghosh on a simple Blocks World problem: I have A on top of B on top of C; get me C ==> B ==> A (or something similar). While solving it, ChatGPT violated the rules of the Blocks World. In practice this means ChatGPT's intelligence (if it has any intelligence) will not be suitable for robotics. Robots need to follow rules like not bumping into humans or trampling babies.

Shekhar Veera told me about the Cyc Project today. Cyc is a long-term artificial intelligence project that aims to assemble a comprehensive ontology and knowledge base spanning the basic concepts and rules about how the world works. Hoping to capture common-sense knowledge, Cyc focuses on implicit knowledge that other AI platforms may take for granted, in contrast with facts one might find somewhere on the internet or retrieve via a search engine or Wikipedia. Cyc enables semantic reasoners to perform human-like reasoning and to be less "brittle" when confronted with novel situations. Douglas Lenat began the project in July 1984 at MCC, where he was Principal Scientist from 1984 to 1994; since January 1995 it has been under active development at the Cycorp company, where he was the CEO. The Cyc project has been described as "one of the most controversial endeavors of the artificial intelligence history". Catherine Havasi, CEO of Luminoso, says that Cyc is the predecessor project to IBM's Watson. Machine-learning scientist Pedro Domingos calls the project a "catastrophic failure", citing among other reasons the unending amount of data required to produce any viable results and Cyc's inability to evolve on its own.
Gary Marcus, a professor of psychology and neural science at New York University and the cofounder of an AI company called Geometric Intelligence, says "it represents an approach that is very different from all the deep-learning stuff that has been in the news." This is consistent with Doug Lenat's position that "sometimes the veneer of intelligence is not enough". Every few years since 1993, Wired Magazine has run a new article about Cyc, some positive and some negative (one issue even contained one of each).

I will stop here. We are now at a juncture: we have billion-dollar LLMs that do not know basic, obvious rules, such as that you cannot move a block on which another block is resting. On the other hand, promising projects like Cyc will not see the light of day, since they will never get the billions of dollars they may need. The conclusion: people like Pranab Ghosh should stop expecting intelligence out of AI. I suggest 3 definitions of AI:
1) AI: Aspirational Intelligence
2) AI: Artificial Ignorance
3) AI: Absurd Intentions
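Coming back to the Blocks World experiment: the rule ChatGPT violated, that you cannot move a block while another block rests on it, takes only a few lines to enforce explicitly. The state encoding below is my own toy sketch for illustration, not Pranab Ghosh's actual setup.

```python
def is_clear(block, on_top_of):
    """A block is clear when no other block rests on it."""
    return block not in on_top_of.values()

def move(block, dest, on_top_of):
    """Move `block` onto `dest` ('table' or another block), enforcing the rules."""
    if not is_clear(block, on_top_of):
        raise ValueError(f"illegal: something is on top of {block}")
    if dest != "table" and not is_clear(dest, on_top_of):
        raise ValueError(f"illegal: {dest} is not clear")
    on_top_of[block] = dest
    return on_top_of

# initial state from the post: A on B, B on C, C on the table
state = {"A": "B", "B": "C", "C": "table"}

# a legal unstacking plan: move A to the table, then B to the table
move("A", "table", state)
move("B", "table", state)

# the kind of violation described above: moving a covered block
try:
    move("C", "A", {"A": "B", "B": "C", "C": "table"})
except ValueError as err:
    print(err)   # B is resting on C, so the move is rejected
```

A rule-based checker like this never makes the mistake, which is exactly the contrast the post draws between explicit rules and LLM behavior.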

Douglas Lenat, Who Tried to Make A.I. More Human, Dies at 72


https://www.nytimes.com

