Microsoft Build 2025: CEO Nadella On The Platform Shift, The “Agentic Web”

“The big winners will be people like you who build applications, not just people like us who build platforms,” says Microsoft CEO Satya Nadella.

Microsoft CEO Satya Nadella said the artificial intelligence platform shift “is entering its middle innings” and pitched the strategy behind the company’s commercial offerings across every layer of the AI stack as the winning approach to democratizing the technology.

“We really take a systems approach, a platform approach, which is what you can expect from Microsoft, at every layer of the stack,” Nadella said during his Microsoft Build 2025 keynote. “The big winners will be people like you who build applications, not just people like us who build platforms.”

Nadella shared Redmond, Wash.-based Microsoft’s cloud and AI achievements and vision.

[Related: Microsoft Build 2025: The news in AI, agents, Windows]

Microsoft’s GitHub and GitHub Copilot enable an open ecosystem for software development in the AI era, Nadella said. Microsoft 365 Copilot, Copilot Studio and Teams enable agents for work and business processes. Foundry enables building AI applications and agents with all of an organization’s data. And all of it comes with the guardrails, identity controls and security that enterprises require.

These offerings and more across the Microsoft stack will build what Nadella called an open, scalable agentic web, with AI assistants and agents serving as, perhaps, the next major interface for the internet and for work.

“You can ask questions and AI assistants give us answers,” Nadella said. “You can assign tasks to agents and they execute them. Or you work with AI to complete work and projects. And you can mix and match all of those elements.”

Here is more of what Nadella said during his keynote, edited for length and clarity.

Visual Studio and the VS family now have more than 50 million users. GitHub has 150 million users. GitHub Copilot, in fact, is used by more than 15 million developers. And we are just getting started.

As GitHub Copilot has evolved inside VS Code, AI has become so central to how we code. And that is why we are open-sourcing Copilot in VS Code.

We will integrate these AI-powered capabilities into the core of VS Code, bringing them into the same popular open-source (repository) that powers the most popular development tool in the world.

We will also continue to build GitHub Copilot. In fact, over recent years, we have gone from code completions to chat to multi-file edits and now agents.

This same pattern is emerging more broadly across the agentic web. You can ask questions and AI assistants give us answers. You can assign tasks to agents and they execute them. Or they work with you to complete work and projects. And you can mix and match all of those elements.

We are building app modernization into agent mode. Copilot can now update frameworks like . . . .NET 6 to .NET 9 and migrate any on-premises application to the cloud.

It creates a plan for your code and dependencies, suggests fixes along the way, learns from the adjustments you make and keeps the entire process transparent.

Think of one of the pain points for all of us: waking up in the middle of the night to deal with a live-site issue.

The (AI) SRE (site reliability engineering) agent starts the triage, the root-cause analysis and the mitigation of the issue, then logs the incident report as a GitHub issue with all the repair items.
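The triage, root-cause, mitigate, file-an-issue flow Nadella describes can be sketched as a toy pipeline. Every name, heuristic and repair item below is hypothetical, a minimal illustration of the pattern rather than Microsoft's actual SRE agent.

```python
from dataclasses import dataclass, field

@dataclass
class Incident:
    service: str
    symptom: str
    notes: list = field(default_factory=list)

def triage(inc):
    # Toy severity heuristic: "down" in the symptom means highest severity.
    sev = "sev1" if "down" in inc.symptom else "sev2"
    inc.notes.append(f"triage: {sev}")
    return sev

def root_cause(inc):
    # Stand-in for real log/metric analysis.
    cause = "connection pool exhausted"
    inc.notes.append(f"root cause: {cause}")
    return cause

def mitigate(inc):
    inc.notes.append("mitigation: recycled the app pool")

def file_issue(inc):
    # Render a GitHub-issue-style report with checkbox repair items.
    repair_items = ["- [ ] raise pool size", "- [ ] alert on pool saturation"]
    return {"title": f"[SRE agent] {inc.service}: {inc.symptom}",
            "body": "\n".join(inc.notes + repair_items)}

inc = Incident("checkout-api", "service down")
triage(inc); root_cause(inc); mitigate(inc)
issue = file_issue(inc)
print(issue["title"])  # "[SRE agent] checkout-api: service down"
```

The repair items in the issue body are exactly what, in the next step of the keynote's story, would get handed off to the coding agent.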

And from there, you can even assign the repair items to GitHub Copilot.

(The full coding agent in GitHub Copilot) takes Copilot from being a pair programmer to being a peer programmer.

You can assign issues to Copilot. Bug fixes, new features, code maintenance, and it will complete these tasks independently.

It sets up a branch. It . . . spins up a virtual machine with GitHub Actions. It checkpoints its work into a PR (pull request) along with session logs. In fact, you can go back to the session logs and keep following the agent’s entire work on the PR as it runs.

The coding agent meets all the enterprise safeguards while offering a great developer experience.

It uses only MCP (Model Context Protocol) servers configured by a developer. We can bring in other agents to do code reviews and keep people in the loop before running CI/CD (continuous integration/continuous delivery) or merging.

I do not think we have had an update of this level since the launch of Teams. It brings together chat, search, notebooks, create and agents in this intuitive scaffolding.

I would say this is the UI (user interface) for AI. And chat, for example, is grounded in web knowledge as well as your work knowledge. That changes the game, especially with (Copilot) Pages.

Search works across all your applications, whether Confluence or Google Drive or Jira or ServiceNow, not just M365 data. With notebooks, I can now create these heterogeneous collections of data. In fact, I can have chats and pages and all the documents, emails, all in one collection. I can even generate audio overviews or podcasts from all of it.

I can use Create to turn a PowerPoint into a new explainer video or to generate an image. And when it comes to agents, we have some special agents, like Researcher.

It has probably been the biggest game-changer for me, because it synthesizes across the web and enterprise sources, applying deep chain-of-thought reasoning to any topic or project.

Analyst goes through raw data across multiple files: I can upload a pile of Excel files. It will pull out insights, it will do forecasts, it will do all the visualizations. What these agents all ultimately do is put expertise at your fingertips.

We are at this moment when we are going to put expertise in everyone’s hands.

We are introducing a new class of enterprise agents that can be fine-tuned on your company’s data, workflows and tone. We call it Copilot Tuning.

It is a matter of making Copilot yours, for the customer, the firm, the company. Copilot can now learn your company’s unique tone and language. And soon it will go further, understanding all of the company-specific expertise and knowledge. All you have to do is seed the training environment with a small set of reference sources and kick off a training run.

The custom model inherits the permissions of all the source data, and once embedded into an agent, it can be deployed to authorized users.

If you are a law firm, you can ground it in your past arguments and the applicable literature and have it provide answers and generate documents very specific to your practice. Or if you are a consulting firm serving, say, multiple vertical industries, you can now start tuning these models per vertical industry to reflect the specific knowledge you have about that industry’s workflows.

It is about taking the expertise you have as a company and amplifying it further.

Now you can think of these agents, and of multi-agent orchestrations that coordinate workflows agent-to-agent across a business process.

As models evolve, getting faster and more capable, with new ones shipping every couple of months, applications will have to evolve with them, into scenarios that are multi-model and multi-agent. That is the big shift now. It is not one model with a single stateless API call for request-response. We are building real, stateful, multimodal applications. And they have to be production-ready.

This is the motivation for building a first-class app server. Think of Foundry as the production line for intelligence. It takes more than a model to build these agents and applications. It is the scaffolding around the model, whether that is evals, the orchestration layer or RAG (retrieval-augmented generation), really everything.

Foundry is that complete app platform for the AI age. More than 70,000 organizations across every industry already use it.

Companies are going from POCs (proofs of concept) to enterprise-scale deployments to unlock AI’s ROI (return on investment). In the past three months alone, we have processed more than 100 trillion tokens, up five times year over year.

We already serve more than 1,900 models, whether response models, reasoning models, task-specific models, multimodal models, you name it.

Choosing a model can be a chore. You have to route your prompts to the right one quickly. We are making that easier, too. Our new model router will automatically pick the best-performing OpenAI model for the job. No more manual model choices.
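A model router of the kind described can be illustrated with a few lines of heuristics. The model names and rules here are made up for the sketch; a production router would score prompts with a classifier rather than keyword checks.

```python
# Toy model router: pick a (hypothetical) model tier per prompt.
REASONING_HINTS = ("prove", "step by step", "why", "plan")

def route(prompt: str) -> str:
    """Return a model name based on cheap heuristics over the prompt."""
    p = prompt.lower()
    if any(h in p for h in REASONING_HINTS):
        return "reasoning-large"   # multi-step reasoning -> biggest model
    if len(p.split()) > 200:
        return "long-context"      # very long input -> long-context model
    return "fast-small"           # default: cheapest, lowest-latency model

print(route("Why does quicksort degrade to O(n^2)?"))  # reasoning-large
```

The payoff of routing is exactly the cost/latency trade Nadella mentions: most prompts fall through to the cheap default, and only the hard ones pay for the large model.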

You can provision throughput once in Foundry and use that provisioned throughput across multiple models, including Grok (from Elon Musk’s xAI).

We now also have Mistral, which you can even leverage for fully sovereign deployment in the EU (European Union) region, something that is increasingly a major consideration for people building applications around the world. I think, more and more, there will be models that people prefer in different places.

Foundry Agent Service lets you build declarative agents, in fact, with just a few lines of code or right in the portal. For complex workflows, it supports multi-agent orchestration. You can essentially consume Agent Service as a managed service. More than 10,000 organizations already use it.
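The two ideas in that paragraph, agents declared as data and a runner that chains them, can be shown in miniature. This is not the Foundry Agent Service API; the agent definitions and the orchestrator below are hypothetical stand-ins for the pattern.

```python
# Declarative agents as plain data, plus a minimal sequential orchestrator.
AGENTS = {
    "summarizer": {"instructions": "shorten the input", "fn": lambda s: s[:20]},
    "shouter":    {"instructions": "uppercase the input", "fn": str.upper},
}

def orchestrate(pipeline, payload):
    """Run each named agent in order, feeding each output to the next."""
    for name in pipeline:
        payload = AGENTS[name]["fn"](payload)
    return payload

result = orchestrate(["summarizer", "shouter"], "hello agentic web, hello build")
print(result)  # "HELLO AGENTIC WEB, H"
```

In a real multi-agent workflow each `fn` would be a model call with its own instructions, but the orchestration shape, an ordered pipeline over declaratively defined agents, is the same.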

We provide a full spectrum of compute so you can hit the right price-performance for any of your agent scenarios.

We are making it simple, for example, to attach Foundry to your app or container endpoints, and to deploy any open-source model on AKS (Azure Kubernetes Service), whether in the cloud or in hybrid mode with Arc. Increasingly, you need models deployed at the edge. Foundry will be there. And we are closing the loop between Foundry and Copilot Studio.

You can now take a model . . . post-train it in Foundry, then drop it into Copilot Studio so you can use that post-trained model to automate a workflow or build an agent.

The other vital consideration for an app server is observability. That is why we are bringing new observability features to Foundry to monitor and manage AI in production. You can track the impact, quality, safety and cost all in one place.

Going forward, we see every organization getting to a place where people and agents work together. That means the systems you use ubiquitously today for things like identity, endpoint management and security now extend to agents. This is a big deal. You need the same rails you use today at massive scale for work between people and agents.

With Entra Agent ID, agents now get their own identity, authorization, policies and access controls. Agents built in Foundry and Copilot Studio automatically show up in an agent directory in Entra. We are also partnering with ServiceNow and Workday for automated provisioning and management of their agents.
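What "agents get their own identity and access controls" amounts to can be sketched as a tiny directory with deny-by-default authorization. The scope names and registry are invented for illustration; this is the pattern, not Entra's implementation.

```python
import uuid

# Toy agent directory: each agent gets its own identity and scoped permissions.
directory = {}

def register_agent(name, scopes):
    """Issue the agent an identity and record its allowed scopes."""
    agent_id = str(uuid.uuid4())
    directory[agent_id] = {"name": name, "scopes": set(scopes)}
    return agent_id

def authorize(agent_id, scope):
    """Deny by default: an unknown agent or a missing scope fails the check."""
    entry = directory.get(agent_id)
    return entry is not None and scope in entry["scopes"]

aid = register_agent("expense-bot", ["read:invoices"])
print(authorize(aid, "read:invoices"))   # True
print(authorize(aid, "write:payroll"))   # False
```

The point of the directory is the one Nadella makes: agents become first-class principals that show up in the same identity system, and can be provisioned and audited, as people are.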

On data governance, (Microsoft) Purview now plugs into Foundry. When an agent writes a completion, it can ensure end-to-end data protection. Another big security consideration.

On the security side, Defender now plugs into Foundry as well. That means your agents are protected like any other endpoint against threats such as wallet abuse or identity theft.

(Foundry Local) includes a fast, high-performance runtime, models, agents as a service and a CLI (command-line interface) for local app development. And yes, it is fully supported on Windows and Mac.

If you are a developer, Windows is the broadest open platform at scale, with more than 1 billion users and 1 billion devices you can reach.

Over the past year, we have seen developers such as Adobe use the AI capabilities in Windows to ship applications.

Windows AI Foundry is what we ourselves use to build features into Copilot+ PCs, for things like Recall or even Click to Do. All of those features are built on the same runtime and SDK. Now we are opening that platform up for the full development lifecycle. Not just on Copilot+ PCs, but across CPUs, GPUs, NPUs and into the cloud. So you can build your app once and run it across all that silicon.

Foundry Local is built into Windows AI Foundry, so you can use that rich catalog of pre-optimized open-source models and run them on your device.

With Windows AI Foundry, you can customize . . . our built-in Phi Silica SLM (small language model) with LoRA (low-rank adaptation) adapters to meet essentially any specific need of your application.
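The reason LoRA adapters make on-device customization practical is arithmetic: instead of updating a full d×d weight matrix W, you train two skinny factors A (d×r) and B (r×d) and apply W + AB. A miniature, pure-Python version with made-up numbers shows the parameter saving:

```python
# LoRA in miniature: a rank-1 update to a 4x4 frozen weight matrix.
d, r = 4, 1

W = [[1.0 if i == j else 0.0 for j in range(d)] for i in range(d)]  # frozen base
A = [[0.5] for _ in range(d)]    # d x r "down" factor (trainable)
B = [[0.1, 0.0, 0.0, 0.2]]       # r x d "up" factor (trainable)

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

delta = matmul(A, B)             # the rank-r update A @ B
W_adapted = [[W[i][j] + delta[i][j] for j in range(d)] for i in range(d)]

full_params, lora_params = d * d, d * r + r * d
print(full_params, lora_params)  # 16 vs 8 trainable parameters
```

At toy size the saving is 2x; at model scale (d in the thousands, r of 8 or 16) the adapter is a tiny fraction of the full matrix, which is what makes per-app fine-tuning of an on-device SLM feasible.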

If (Microsoft-backed OpenAI’s) o1 and DeepSeek marked the beginning of, fundamentally, inference- or test-time compute in the cloud, I believe Phi Silica will absolutely revolutionize what inference looks like on the PC. All of you as developers are going to use this to build incredible experiences.

Windows will come with several built-in MCP servers, for things like the file system, settings and app actions.

We are adding an on-device MCP registry that lets MCP-compatible clients discover secure MCP servers that we have vetted for security and performance, while keeping the user in control.
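The registry's two guarantees, only vetted servers are discoverable and the user still approves each connection, can be sketched in a few lines. The server names and vetting flags are invented; this mimics the access model, not the MCP wire protocol.

```python
# Toy MCP-style registry: vetted servers only, with user consent per connect.
REGISTRY = [
    {"name": "files",    "vetted": True},
    {"name": "settings", "vetted": True},
    {"name": "unknown",  "vetted": False},
]

def discover():
    """Only servers that passed vetting are offered to clients."""
    return [s["name"] for s in REGISTRY if s["vetted"]]

def connect(name, user_approves):
    """Refuse unvetted servers outright; vetted ones still need user consent."""
    if name not in discover():
        raise PermissionError(f"{name} is not a vetted server")
    if not user_approves:
        return "declined"
    return f"connected:{name}"

print(discover())                          # ['files', 'settings']
print(connect("files", user_approves=True))  # connected:files
```

Note the two distinct gates: vetting is enforced by the registry, while consent stays with the user, matching the "verified by us, controlled by you" framing.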

(NLWeb) democratizes the creation of intelligence for any application, any website. And it democratizes the aggregation of that intelligence for any developer.

Give a reasoning model NLWeb endpoints and a goal, and it can start composing with that reasoning model to go synthesize across all this distributed intelligence.

Whether it is search, which is one aggregator, all of these things change completely in how they are built. This is a platform we need to build together. I think something big will come out of it.

The last 10, 15 years have been about the power of the aggregator. I think something big is going to change.

We are integrating (Azure) Cosmos DB directly into Foundry. Any agent can store things like conversation history. Soon you will be able to use Cosmos for all your agentic application needs.

Now a query inside PostgreSQL can have LLM responses built in. You can have natural language and SQL together.
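Mixing model output into a SQL query can be imitated locally with SQLite by registering a Python function as a SQL user-defined function. The "model" here is a keyword stub, not a real LLM, and this is an analogy for the pattern, not the Azure PostgreSQL feature itself:

```python
import sqlite3

# Stand-in for an LLM call: a stub "sentiment model" exposed as a SQL function,
# so one plain SQL statement can mix relational filters with model output.
def fake_sentiment(text: str) -> str:
    return "positive" if "great" in text.lower() else "negative"

conn = sqlite3.connect(":memory:")
conn.create_function("llm_sentiment", 1, fake_sentiment)
conn.execute("CREATE TABLE reviews (id INTEGER, body TEXT)")
conn.executemany("INSERT INTO reviews VALUES (?, ?)",
                 [(1, "Great keynote"), (2, "Too long and dull")])

# Natural-language-style classification invoked from inside ordinary SQL.
rows = conn.execute(
    "SELECT id, llm_sentiment(body) FROM reviews ORDER BY id").fetchall()
print(rows)  # [(1, 'positive'), (2, 'negative')]
```

The design point is the same as in the keynote: the model call lives where the data lives, so you filter, join and classify in one query instead of round-tripping rows through application code.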

(Fabric) is at the center of our data and analytics stack. Fabric brings together all your data and all your workloads in one unified experience. Last fall, we brought SQL into Fabric.

We are also bringing Cosmos DB into Fabric. AI applications need more than structured data. They need semi-structured data: text, images, audio.

With Cosmos in Fabric, that data will be available right alongside SQL, so you can unify your entire data estate and get it ready for AI.

We are even building our digital twin builder into Fabric. You can now build digital twins with no code, low code.

You can map the data from your physical assets and systems very quickly.

(Copilot in Power BI lets you) chat with all your data. You can ask questions of the data, explore it visually, and analyze it across multiple Power BI reports and semantic models.

This agent will also be available in Microsoft 365. The strength of all the work you have done, building semantic models, building those Power BI dashboards, and now being able to put reasoning models on top through a chat interface: think about what a game-changer that will be.

As a developer, you face this classic optimization challenge between delivering the most performant AI in terms of throughput and latency, and then, of course, having to deliver it at continuously falling costs.

That is why we take a systems approach spanning the industry to optimize the complete stack, whether that is the data center, silicon, system software or the app server. Composing the whole system and optimizing it with the power of software.

Our goal is to offer the lowest-cost infrastructure for the next generation of AI workloads. It all comes down to delivering the most tokens per watt per dollar. That is sort of the ultimate equation for us.
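The "tokens per watt per dollar" equation is easy to make concrete. The throughput, power and price numbers below are invented purely to show how the figure of merit compares two setups:

```python
# "Tokens per watt per dollar" as a toy figure of merit for two setups.
def tokens_per_watt_dollar(tokens_per_sec, watts, dollars_per_hour):
    return tokens_per_sec / watts / dollars_per_hour

old = tokens_per_watt_dollar(1000, 500, 4.0)   # hypothetical older setup
new = tokens_per_watt_dollar(3000, 600, 5.0)   # hypothetical newer setup
ratio = new / old
print(ratio)  # 2.0: the newer setup is 2x better on this metric
```

Note that the newer setup draws more power and costs more per hour, yet still wins, which is exactly why the metric is a three-way ratio rather than raw throughput.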

The largest GB200-based supercomputer will be on Azure. We are extremely excited to scale it up and make it available to all of you as developers.

We now have more than 70 data center regions, more than any other provider. In the past three months alone, we have opened 10 data centers across countries and continents.

We build the complete AI system, and that includes the cooling to meet the demands of AI workloads. With Maia, we brought in and designed this liquid cooling unit, which also supports GB200 in a closed-loop mode so that it can consume zero water.

On the network side, our newest data centers built for AI have more fiber optics than we added across all of Azure before last year.

We connect our DCs (data centers) with this 400-terabit backbone.

Whether you need training or distributed inference, the AI WAN connects it all.

Last October, we introduced Cobalt, our Arm-based virtual machines. They already power a large share of our own workloads at giant scale. Teams runs on it. Defender runs on it.
