Enough Trace

Question: How many Database Server trace files does it take for Oracle Support to change a light bulb?

Answer: We need 2 more Database Server trace files to tell you.

Vicken Khachadourian

As an Oracle Database Support Escalation Engineer, Vicken discovered and solved one of the biggest challenges for the modern-day incarnation of Artificial Intelligence, 28 years before its biggest leaders started to discuss it.

What the AI community calls hallucinations in Large Language Models started at Oracle in 1995, with Oracle's error messages and diagnostics.

Vicken discovered that in AI based systems, data and context get de-coupled, resulting in catastrophic failures. By mid-1996, Vicken had written a paper about it, which Oracle fought against until 2007. The industry still has not solved the problem today. Vicken's 100% success record has been covered up.

In this YouTube video, from 2024, NVIDIA's Jensen Huang and OpenAI's Ilya Sutskever discuss that challenge. Jensen uses two examples of how the words "Great" and "Sick" can appear in many contexts. Forward to timestamp 3:50.

NVIDIA's Jensen Huang and OpenAI's Ilya Sutskever YouTube video. Forward to timestamp 3:50 ----- If this link gets broken or if YouTube removes it, please contact Vicken

Check the About Vicken and Context Aware Computing Initiator links on this website to see the evidence. Scroll down to the red arrow on those pages.

Jensen and Ilya are working on Large Language Models, where a mistake produces nothing worse than a funny sentence. In Vicken's case, he worked to keep Oracle's diagnostics in context, with real production data, on 400 of the toughest cases at Oracle, where, for governments and businesses worldwide, every minute was worth millions of dollars. A mistake by Vicken would have cost millions or billions of dollars, not just produced a laughable sentence in ChatGPT or Elon's Grok.

In this YouTube video, at 02:04:42, Elon Musk, during the June 2024 shareholders meeting, says that on many occasions his AI model will fix one problem but create another. He is 28 years behind Vicken, who maintained 100% success against this challenge on 400 of the toughest cases at Oracle. Normal, traditional bug fixing will not fix this problem; Vicken and Oracle have the proof. At 02:09:15 he says he is happy that many drivers do not use Autopilot or FSD, because that way he can run FSD in Shadow Mode and learn by making comparisons. That proves TESLA's entire software stack is always in play. ----- If this link gets broken or if YouTube removes it, please contact Vicken

With his 100% success at Oracle Support, from 1995 to the present, Vicken Khachadourian overcame the context-related obstacles in the LLM hallucination challenge. That puts him at least two decades ahead of Google, TESLA, OpenAI, Musk's xAI and Grok, Microsoft's Copilot, Oracle and others, who are still trying to solve it.

Every credible authority in Artificial Intelligence acknowledges the crucial, if not decisive, role of context as the main cause of Large Language Model (LLM) hallucinations.

Here is what a Google search says about it.

Before LLMs, by mid-1995, Oracle's diags and error messages had become like the nodes, or neurons, in the massive Neural Networks used in all AI implementations. By 1995-96, Oracle's database could produce up to 500 phone books' worth of diagnostic output per failure, with highly granular diag data. Support Engineers and Oracle developers were the human processing layer in diagnosing problems, piecing together those neurons. I discovered the crucial role of context and championed it more than a decade before anybody else at Oracle, and therefore the industry. On 400 of the toughest customer database failure cases, as an Oracle Escalation Database Support Rep, on every case, without exception, I overcame the LLM hallucination challenge, with Oracle's diags as neurons. I maintained a 100% success rate.

At my initiative, Oracle conducted a 2.5-year intensive investigation and proved the following about Artificial Intelligence / Machine Learning. A company like Oracle does not conduct a 2.5-year investigation over nothing:

1 - All data, always, comes with a built-in component: context. Context gives data its accuracy and relevancy.

2 - In AI based systems, when massive amounts of data get collected from multiple sources, context and data get de-coupled. This results in deadly miscalculations. Among them are TESLA and other autonomous car accidents, airplane crashes and Oracle-based IT failures. Hallucinations are specific to LLMs; context applies to all of the mentioned areas.
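The de-coupling in point 2 can be sketched in a few lines of code. This is a minimal, hypothetical illustration (the sensor names and values are invented, not from any real system): two sources report the same quantity in different units, and a pipeline that keeps only the raw numbers while discarding each record's context produces a plausible-looking but wrong answer.

```python
# Hypothetical sketch of context/data de-coupling: two telemetry
# sources report speed in different units. The context (units, source)
# travels with each record; a naive pipeline that keeps only the
# numbers silently merges incompatible data.

readings = [
    {"value": 100, "context": {"source": "sensor_a", "units": "km/h"}},
    {"value": 100, "context": {"source": "sensor_b", "units": "mph"}},
]

# De-coupled: drop context and average the raw numbers.
decoupled_avg = sum(r["value"] for r in readings) / len(readings)

# Context-aware: normalize each value using its own context first.
def to_kmh(r):
    factor = 1.60934 if r["context"]["units"] == "mph" else 1.0
    return r["value"] * factor

contextual_avg = sum(to_kmh(r) for r in readings) / len(readings)

print(decoupled_avg)   # 100.0 -- looks plausible, but mixes units
print(contextual_avg)  # 130.467 -- the physically meaningful average
```

The de-coupled result is not obviously broken, which is exactly the danger: the number is well-formed, only its meaning is gone.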

The reason industry players are focusing on LLMs is that they need a safe way to practice this misguided approach, which my evidence proves has not only been failing since 1996 but has been getting worse every day. My evidence proves that Sam Altman is 100% wrong when he says his ChatGPT is currently in its stupidest state and will only improve from here. Oracle's 2.5-year investigation proves him wrong.

This model was in its most accomplished and smartest state in 1996, when I succeeded with the Orlando Oracle Database Support engineers, a newly formed group that grew from 20 people to 70 in 18 months. Oracle's managers used my success as proof of concept. Since then it has been getting dumber and dumber, first at Oracle and now across the entire industry. My success has been 100% despite all the attacks they subjected me to. When an LLM hallucinates, the result is a senseless paragraph; nobody dies. When TESLA software processes data out of context, people die. Maybe now we know why Elon Musk wants his Grok to give the "Funniest" answers, as if he has control over what it is going to do and is deliberately making it come out that way.

With people being burnt in fiery TESLA crashes again and again, they have to pour billions into making LLMs work first. The first five pages of my crazy resume go into this issue at length. Leaders in the USA complain about intellectual property theft by China. Of course I'm on team USA, but nobody talks about how big high tech companies have routinely confiscated the innovation of small ones, to kill innovation. They hold meetings, with NDAs signed, to "evaluate" software. After "evaluating" the innovative product, they tell the small player their software was not good, only to release their own version a year later. Upper managers steal innovation from employees and then try to destroy the innovating employee. That is what they tried with me, even though I was giving Oracle everything. So far, their failures are getting worse by the day.

As an Oracle employee I always took the position that what I discovered belonged to Oracle. I gave them what I had, but that was not enough for the managers, who became millionaire VPs. They knew that if they acknowledged my discovery as valid on its merits, their careers could take a hit, so, since 1996, I have been watching the biggest players in high tech try to match my context-related success.

Sadly, the biggest failure so far took place on October 7, 2023, with the Israel-Hamas war, followed by the World Central Kitchen convoy attack on April 1, 2024. According to Israel's official report, their AI system mistook a bag for a weapon, mislabeling an aid worker as a militant.

A month before October 7, 2023, Benjamin Netanyahu came to Fremont, CA, and praised Elon Musk for his AI initiatives.

A few months before October 7, 2023, Oracle published a study called "The Decision Dilemma," in which Oracle argued that 70% of business and government decision makers want robots to make their important decisions when massive amounts of data are being processed. Of course I'm pro automation and technology, but we're galaxies and centuries away from computers beating human intelligence. Oracle's 2.5-year study proves what I'm presenting.

The entire world knows that Israel built a functionality-rich threat monitoring system, designed to detect and avert what they viewed as threats from the Gaza Strip. Their system collected massive amounts of data from a diverse set of sources. Oracle's study proves that such a system was highly vulnerable to miscalculations and false conclusions, because the de-coupling of data from its context was present.

According to a New York Times article, Tzachi Hanegbi, Israel's national security adviser, told the Israeli security chiefs that the danger to Israel in the weeks before October 7, 2023 was from Lebanon. Hamas was barely mentioned.

3 - Faster processing without context will send systems to disaster even faster. Telling drivers to take charge of a disastrous, autonomous, potential car accident situation at a moment's notice is impossible.

On many occasions, taking charge of such a car is like trying to stop an SMS/text from traveling through cyberspace after you have already pressed "Send". You'll have no control. The text will go where it's destined to go.

4 - The high tech industry has a widely held belief that when a programmer fixes a bug, he's getting ahead. Sometimes fixing a bug exposes other bugs, but the belief is that fixing bugs will eventually get you ahead. I proved that there are structures within which a programmer operates where, no matter how many bugs you fix, you will not only not get ahead, you may even go backwards. I fixed this problem at Oracle on my cases, 400 of them, with 100% success. I utilized my context-based discovery for my success. I made sure context and data were never de-coupled.

Oracle has the conclusive study, proving all of these 4 points.

In the June 2024 shareholders meeting, Elon Musk admits that using an AI model to fix one problem will create another. Forward to 02:04:42. At 02:09:15 he actually says that it's a good thing not everybody is using FSD, because he can run FSD in "Shadow Mode" and observe how human drivers make decisions, so he can improve FSD. Is he experimenting on unsuspecting human subjects? ----- If this link gets broken or if YouTube removes it, please contact Vicken

On June 12, 2023, during the quarterly earnings call, Oracle's CEO Safra Catz said Oracle "implemented AI/Machine Learning capabilities years before anyone was talking about it."

100% true. It started at Oracle in 1996. Oracle fought the discovery for 11 years, tried to fix it, and fired the mathematician who had 100% success with it, because the supervising VPs wanted to take credit for the solution themselves, and they failed. Now, in 2024, it has spilled over into other areas of computing, unfortunately killing many TESLA victims.

TESLA's EDR (Event Data Recorder) and other forensic data have to be studied in context; otherwise, many innocent people will sit in jails to help Elon Musk "save humanity". Between a driver's foot on the accelerator or brake and the tires of a TESLA, layers of software are present. When TESLA says the car was going at such and such a speed and the driver's foot was on the accelerator, how do we know which part of the speed is from the driver and which part is from the software?

There are YouTube videos in which Elon Musk makes a crowd laugh by saying that he wore a T-shirt with a STOP sign painted on it, and that his TESLA noticed the STOP sign on the T-shirt and stopped. After the crowd laughs, he says that it's an easy fix. He says that he'll fix it.

Not true. Neither Elon Musk nor the high tech industry has a fix for this problem. They are fixing individual episodes of failure, but the systemic failure is getting exponentially worse. Oracle based systems are being built on false assumptions.

In late April 2023, TESLA's lawyers tried to shield Elon Musk from being placed under oath by telling Judge Evette Pennypacker that the videos referenced on YouTube in the above link could be deepfakes. The judge rejected their argument and ordered Mr. Musk to participate in a 3-hour deposition. It's no surprise that TESLA settled that case. Musk's credibility will get destroyed if he's sworn under oath.

Elon Musk is galaxies away from fixing this problem, which Oracle has known about since 1996. It was small then; it's an epidemic now. The T-shirt is one specific example, but this problem exists everywhere in AI based systems, and it's going to get worse. A TESLA does not evaluate a STOP sign like humans do. It is collecting and evaluating billions of pixels. That's just one example out of millions.

During Oracle's official investigation into this matter, I labeled it Processing Data Out-of-Context. The AI experts for Large Language Models call it Hallucination. Google's CEO Sundar Pichai, on the U.S. news magazine TV show 60 Minutes, admitted that their system sometimes hallucinates.

In mid-2023, 7 years after promising Full Self Driving to the world, Elon Musk finally ditched his fatally flawed, rules-based TESLA FSD and is now promoting a network-path-based approach, analyzing the billions of video clips TESLA cars fed into his system. He finally agreed with me on using technical stories to put AI data in context. See the first page of my resume. Elon is 27 years late. For the sake of everyone involved, I wish him plenty of luck. I'm not holding my breath.

I also saved Southwest Traders, a $450 million per year company, from going under, got them a full refund from Oracle and saved 850 jobs. That achievement gave the famous lawyer Tom Girardi of Girardi & Keese a $400 million lawsuit, which they took on contingency. My suspicion is that Tom Girardi and his law firm got destroyed because they have the lawyer's version of my story about my AI-related discovery, which would easily implicate Oracle.

Oracle's patent history in the following link is one piece of proof. There are many others.

Oracle patent history - no context related patents before 2007, more than 200 patents afterwards

After all those patents, in 2023, Oracle and its close collaborators Salesforce.com and NVIDIA invested $270 million in the company Cohere to find context-based solutions. If Oracle had had a solution that matched my success when they fired me, they would not have invested that kind of money in Cohere. It also means Oracle could not project solving the problem with its in-house developers, even though I consider Oracle's developers some of the best in the world; I worked with them routinely. Here are the links:

Oracle's $270 million investment into Cohere 1 - June 2023

Oracle's $270 million investment into Cohere 2 - June 2023

6 months later, in January 2024, they upped the investment to $1 billion

2 months later, in March - June 2024, they increased the investments to reach a valuation of $5 billion. They are under pressure.

The FBI Director, Christopher Wray, agreed with this problem and how deadly it can be when he featured it in his presentation at the World Economic Forum in Davos, Switzerland, in January 2023.

The FBI Director agreed with the risk at the World Economic Forum - Jan 2023, Davos Switzerland

I warned Oracle in 1996, soon after Oracle introduced the max_dump_file_size parameter, and I blew the whistle internally at Oracle in 2005, proving that Oracle's trace files were misleading technicians around the world. The problem is much bigger today, globally, estimated at hundreds of billions of dollars over 10 years. Henry Kissinger, the former CEO of Google Eric Schmidt, and the dean of MIT's Schwarzman College of Computing Dan Huttenlocher agree with this. So do the current CEOs of Oracle and Amazon, Safra Catz and Andy Jassy.
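The truncation risk mentioned above can be illustrated with a small simulation. This is not Oracle's actual code, and the trace lines below are invented for illustration (the ORA-00600 message format is real; max_dump_file_size is a real Oracle initialization parameter that caps trace file size): when a size cap cuts off a trace, the lines written last, which often carry the diagnostic context, are the ones lost.

```python
# Illustrative simulation (not Oracle internals): a size cap like
# max_dump_file_size can strip the context from a diagnostic trace.
# The root-cause line is written last, so a truncated trace shows
# only the early symptoms. All trace content below is hypothetical.

trace_lines = [
    "ORA-00600: internal error code, arguments: [kcbgtcr_4]",
    "call stack: kcbgtcr <- ktrget <- kdsgrp",
    "buffer header dump follows ...",
    "context: block corrupted by lost write on redo apply",  # root cause
]

def write_trace(lines, max_bytes):
    """Emit lines until the size cap is hit, like a capped trace file."""
    out, size = [], 0
    for line in lines:
        if size + len(line) > max_bytes:
            break  # remaining (often most informative) lines are lost
        out.append(line)
        size += len(line)
    return out

truncated = write_trace(trace_lines, max_bytes=120)
print(len(truncated))  # 2 -- the root-cause line never made it to disk
```

A technician reading the truncated file sees a valid-looking trace with the symptoms intact and the cause missing, which is exactly how a diagnostic artifact can mislead without appearing broken.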

Agreeing with me is one thing. Having a 26-year history of 100% success with 400 of the toughest cases at Oracle is what I have. None of them have that.

That's why I'm a blacklisted mathematician. I have the solution. They don't.