This is an article (found on HN) that explores the capabilities of OpenAI’s o3 model. AI reasoning is getting closer and closer to surpassing human capabilities, but running o3 to solve a single problem is extremely expensive.
I stumbled over this very interesting article on HN about a new kind of context memory system that is able to remove “unhelpful or redundant details” from the context.
Thinking further, I believe this would be super helpful for semantic search, which currently does not perform well because there are no filters that extract what is actually important. Until now I have tried to counter this problem with summarization through small LLMs, but as one might guess, that turns out to be imprecise and very expensive. There are other ways one could post-process text with LLMs, but they are not very efficient either.
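To make the idea concrete, here is a minimal sketch of pruning redundant chunks before they are indexed for semantic search. It uses a cheap lexical-overlap (Jaccard) heuristic as a stand-in for the LLM-based importance filtering described in the article; all function names and the example chunks are illustrative assumptions, not taken from any specific library.

```python
def jaccard(a: set, b: set) -> float:
    """Jaccard similarity of two token sets (1.0 = identical)."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)


def prune_redundant(chunks: list[str], threshold: float = 0.8) -> list[str]:
    """Keep a chunk only if it is not a near-duplicate of an earlier kept chunk."""
    kept: list[str] = []
    kept_tokens: list[set] = []
    for chunk in chunks:
        tokens = set(chunk.lower().split())
        if all(jaccard(tokens, t) < threshold for t in kept_tokens):
            kept.append(chunk)
            kept_tokens.append(tokens)
    return kept


# Hypothetical chunks destined for an embedding index:
chunks = [
    "The o3 model sets new records on reasoning benchmarks.",
    "The o3 model sets new records on reasoning benchmarks.",  # exact duplicate
    "Running o3 on a single task can be extremely expensive.",
]
print(prune_redundant(chunks))
```

In a real pipeline one would of course replace the Jaccard heuristic with a proper importance model, but the shape stays the same: filter before embedding, so the index only holds non-redundant material.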