Medal, a startup best known for its video game clipping products, announced it has raised $13 million at a $333 million valuation from multiple investors, including Horizons Ventures, OMERS Ventures, Peak6, and Arcadia Investment Partners.
The company also unveiled Highlight, a new cross-platform desktop app that acts as a contextual, AI-powered assistant. The app can capture the content on the user's screen and let them ask questions of a large language model (LLM) with that content as context.
Henry Gladwin, a partner at OMERS Ventures, told TechCrunch in a call that the venture firm saw an opportunity in applying Medal's core technology to LLMs.
"Medal's core technology is built on the idea of understanding what is happening on someone's device, such as video, audio, and on-screen activity. Originally it was used for clipping. Now the company is taking that technology and applying it to LLMs as context, and that's a smart use," Gladwin said. He added that he saw Medal not just as a gaming company but as a product that records the best moments of virtual life, and that the Highlight app is a natural extension of that paradigm.

For years, companies have tried to create assistants that use the information displayed on a screen to be useful. Google has made attempts with Google Now, Google Assistant, and now Gemini. Apple made inroads into this space at last month's Worldwide Developers Conference (WWDC) when it announced Apple Intelligence and its ability to understand contextual information on the screen. Microsoft is also leveraging generative AI with Windows Recall, a feature that helps users find content they've viewed in the past, though it decided to delay Recall's launch after the initial announcement.

Medal's main focus for Highlight is work-related use cases. In its current iteration, the app lives as a floating button on the desktop. Whenever you hover over the icon, the on-screen content is captured and passed as context to a model of your choice. Users can then ask questions through various tools, such as ChatGPT, Anthropic's Claude, and Perplexity.
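To make the capture-then-ask flow concrete, here is a minimal sketch of how an assistant might package a screen capture as LLM context. Medal has not published Highlight's internals, so `build_context_request` is a hypothetical helper; the payload shape follows OpenAI's public multimodal chat format (an image sent as a base64 data URL alongside the user's question), which is one common way such apps talk to a model.

```python
import base64


def build_context_request(screenshot_png: bytes, question: str,
                          model: str = "gpt-4o") -> dict:
    """Build an OpenAI-style multimodal chat payload that passes a
    screen capture as context alongside the user's question.

    Illustrative sketch only: the screenshot bytes would come from a
    local capture step, and the payload would be sent to the chosen
    model's API (ChatGPT, Claude, etc.) by the caller.
    """
    # Encode the captured image as a base64 data URL so it can be
    # embedded directly in the request body.
    data_url = ("data:image/png;base64,"
                + base64.b64encode(screenshot_png).decode("ascii"))
    return {
        "model": model,
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": question},
                    {"type": "image_url", "image_url": {"url": data_url}},
                ],
            }
        ],
    }
```

Because the capture happens locally and the payload is built per question, nothing needs to be stored between requests, which matches the privacy behavior described below.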
Depending on the model, the app suggests some starter questions to help users get going. Capture happens locally, and the app does not store the content. The company is also building its own assistant, similar to ChatGPT, which may be less capable than cloud-based models for some tasks but could ultimately run entirely on-device. In addition to on-screen content, documents and system audio can be passed to Highlight as context. To address voice use cases, the company is developing a local meeting transcription feature similar to tools like Granola, Limitless, and Krisp.