
Enhancing Signal and De-noising in Graph Neural Network Aggregation Mechanisms

One of the prevalent problems in graph neural networks is context propagation. As content is propagated through the graph aggregation mechanism, we often lose information or suppress signal in favor of noise. Noise is also propagated through the network when we perform neighborhood aggregation around a node. These well-known problems are usually addressed by sampling: we up-sample the signal, down-sample the noise, and empirically determine whether the aggregation can be made more effective.
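To make that concrete, here is a minimal sketch of neighborhood aggregation with importance-based sampling, where neighbors that look like signal are sampled more often than noisy ones before a simple mean aggregation. The `signal_score` relevance function and the plain-dictionary feature layout are assumptions for illustration, not a prescribed implementation.

```python
import random

def aggregate_neighborhood(node_feats, neighbors, signal_score, k=5):
    """Aggregate a node's neighborhood by mean-pooling a weighted sample.

    node_feats:   dict mapping node id -> feature vector (list of floats)
    neighbors:    list of neighbor node ids for the target node
    signal_score: callable scoring how much "signal" a neighbor carries
                  (hypothetical; could be an attention weight, a degree
                  heuristic, or a learned relevance model)
    k:            number of neighbors to sample, which up-samples
                  high-signal neighbors and down-samples noisy ones
    """
    if not neighbors:
        return None
    # Weight the sample toward high-signal neighbors.
    weights = [max(signal_score(n), 1e-6) for n in neighbors]
    sampled = random.choices(neighbors, weights=weights, k=min(k, len(neighbors)))
    # Simple mean aggregation over the sampled neighborhood.
    dim = len(node_feats[sampled[0]])
    agg = [0.0] * dim
    for n in sampled:
        for i, v in enumerate(node_feats[n]):
            agg[i] += v / len(sampled)
    return agg
```

The sampling step is the only place signal and noise are treated differently; the mean-pooling itself stays unchanged, which keeps the sketch easy to swap into other aggregation schemes.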

Context brings insight

Without context there is no deep insight into a situation. So-called “situational awareness” is vigilance about our surroundings, looking for clues and indications of how to respond; in one word, context, bringing contextual insight.

One of the main problems with capturing context effectively is that context tends to be highly time-dependent and time-sensitive; it is ephemeral. How can we capture the ephemeral aspects of context while exploring the different nodes of a graph neural network?
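One way to sketch that time sensitivity, purely as an illustrative assumption rather than something the aggregation mechanism dictates, is to decay each neighbor's contribution by the age of its observation, so stale context fades out of the aggregation. The half-life parameter here is hypothetical.

```python
import time

def time_decayed_weight(observed_at, half_life_s=3600.0, now=None):
    """Exponentially decay a neighbor's weight by the age of its observation.

    observed_at: unix timestamp when the neighbor's context was captured
    half_life_s: hypothetical half-life controlling how fast context fades
    """
    now = time.time() if now is None else now
    age = max(now - observed_at, 0.0)
    return 0.5 ** (age / half_life_s)
```

Multiplying each neighbor's features by such a weight before aggregation is one simple way to keep the ephemeral parts of context from being treated as if they were permanent.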

The commonality between the decision-making process and decision science is that decision making is highly temporal. We make decisions within particular circumstances: as markets go up and down, we may enter a financial market when we feel we are getting in at the right time, and we sell when we think the difference between what we have gained and the market is sufficient.

When we need to make a decision, we should consider that decision point as a node in a graph neural network. As we aggregate additional neighboring nodes, we accumulate additional context that benefits the decision-making process. In other words, the information content at that node has to exceed a certain threshold before the decision can be considered a solid one.

Therefore, if we can aggregate a sufficient number of neighboring nodes in the graph neural network, we will have enough information to make a decision. However, the decision is contingent on the domain: within that domain we have a number of adjacent nodes, and we need to build an embedding from all of those adjacent nodes so that the result carries enough context to support a decision-making capability.
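A rough sketch of that reasoning follows, under assumptions the post does not specify: a norm-based `information_content` proxy and an arbitrary domain threshold. We fold adjacent-node embeddings into the decision node's context until the estimate clears the threshold, at which point the decision can be treated as solid.

```python
from typing import Iterable, List, Optional

def information_content(embedding: List[float]) -> float:
    """Crude proxy for information content: the embedding's L2 norm.
    (A hypothetical choice; entropy or a learned confidence score
    could be used instead.)"""
    return sum(v * v for v in embedding) ** 0.5

def decide_with_context(node_embedding: List[float],
                        neighbor_embeddings: Iterable[List[float]],
                        threshold: float = 3.0) -> Optional[List[float]]:
    """Fold adjacent-node embeddings into the decision node's context.

    Returns the aggregated embedding once it carries enough information
    to support a decision, or None if the neighborhood never provides
    sufficient context. The threshold is domain-specific and illustrative.
    """
    context = list(node_embedding)
    for neighbor in neighbor_embeddings:
        # Sum-pool the neighbor's embedding into the running context.
        context = [c + n for c, n in zip(context, neighbor)]
        if information_content(context) >= threshold:
            return context  # enough context to treat the decision as solid
    return None
```

The point of the sketch is the stopping rule, not the particular pooling: the decision is only emitted once the aggregated neighborhood has contributed enough context for the target node.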

To use an example: if we are accelerating and we come to a street sign showing 45 miles per hour, as shown in the diagram, but the sign is hidden from us behind a tree, we will not have adequate knowledge to pick up that signal and decide to decelerate, and we may still get caught speeding.

Contextual AI

Today, the state of the art is to compute over large spreadsheets of data, or matrices, to discover underlying patterns.

We’ve gone from crafting manual rules, to rule engines built with imperative programming, to more induction-oriented statistical modeling.

The next phase is putting more intelligence into AI, such as reasoning and decision making. This change requires contextual AI, and Emogi is defining that space.

AI involves more than statistics; it draws on results from psycholinguistics, cognitive science, and psychology, using them all to perceive the environment and evolve on its own, rewriting its hypotheses and testing them without procedures that prompt it to do so.