Microsoft Build 2025: CEO Nadella On The Platform Approach, The ‘Agentic Web’

“The big winners will be people like you who are going to build apps, not just people who are building platforms like us,” says Microsoft CEO Satya Nadella.

Microsoft CEO Satya Nadella said that AI replatforming is “entering its middle innings” and presented his company’s systems-based strategy of building every layer of the AI stack as the winning approach for democratizing the technology.

“We really take a systems approach, a platform approach, which is what you can expect from Microsoft, at every layer of the stack,” Nadella said in his keynote at Microsoft Build 2025. “The big winners will be people like you who build applications, not just people who create platforms like us.”

Nadella shared the achievements and ambitions of the Redmond, Wash.-based cloud and AI provider.

[Related: Microsoft Build 2025: The Most Important News in AI, Agents, Windows]

Microsoft’s GitHub and GitHub Copilot enable an open ecosystem for software development in the AI era, Nadella said. Microsoft 365 Copilot, Copilot Studio and Teams enable agents for business and knowledge-work processes. Foundry enables building AI apps and agents on top of all of an organization’s data. And all of it runs on the identity, management and security rails that enterprises require.

These tools and more in the Microsoft stack will build what Nadella called an open, scalable agentic web, speaking to the role of AI assistants and agents as, perhaps, the next major interface for the web and for getting work done.

“You can ask questions and AI assistants give you answers,” Nadella said. “You can assign tasks to agents and have them executed. Or AI works side by side with you on entire jobs and projects. And you can mix and match all of those form factors.”

Here’s more of what Nadella had to say in his keynote, edited for length and clarity.

Visual Studio and the VS family now have more than 50 million users. GitHub has 150 million users. GitHub Copilot, in fact, is used by more than 15 million developers. And we’re just getting started.

As GitHub Copilot has evolved inside VS Code, AI has become central to how we code. And that’s why we are open-sourcing Copilot in VS Code.

We’ll integrate those AI-powered capabilities into the core of VS Code, bringing them into the same open-source repository that powers the world’s most popular development tool.

We will also continue to build out GitHub Copilot. In fact, over recent years, we have progressed from code completions to chat to multi-file edits and now agents.

This same pattern is emerging more broadly across the agentic web. You can ask questions, and AI assistants give you answers. You can assign tasks to agents and have them run. Or AI works side by side with you on entire jobs and projects. And you can mix and match all of those form factors.

We’re building app modernization into agent mode. Copilot can now update frameworks, like . . . .NET 6 to .NET 9, and migrate any on-premises application to the cloud.

It creates a plan for your code and dependencies, suggests fixes along the way, learns from the adjustments you make and keeps the entire process transparent.

Think about one of the pain points for any of us: being woken up at night to deal with a live-site issue.

The (AI) site reliability engineering (SRE) agent kicks off the triage, the root-cause analysis, the mitigation, and then files the incident management report as a GitHub issue with all the repair items.

And from there, you can even assign those repair items to GitHub Copilot.

(The full coding agent in GitHub Copilot) takes Copilot from being a pair programmer to a peer programmer.

You can assign Copilot issues: bug fixes, new features, code maintenance. And it will complete those tasks autonomously.

It sets up a branch. It . . . spins up a virtual machine using GitHub Actions. It issues a draft pull request (PR) with session logs. In fact, you can go back to the session logs and watch the complete trace of the PR as it works.

The coding agent adheres to all security measures while delivering a great developer experience.

It uses MCP (Model Context Protocol) servers configured by the developer. You can bring in other agents to do code reviews and keep people in the loop before running CI/CD (continuous integration/continuous delivery) or merging.

I don’t think we have had an update of this magnitude since maybe the launch of Teams. It brings together chat, search, notebooks, create and agents in one intuitive scaffolding.

I like to say this is the UI (user interface) for AI. And chat, for example, is grounded in web knowledge as well as your work data. It’s contextual, especially with (Copilot) Pages.

Search works across all your applications, whether Confluence or Google Drive or Jira or ServiceNow, not just Microsoft 365 data. With notebooks, I can now create these heterogeneous collections of data. In fact, I can have chats and pages and documents and emails all in one collection. I can even get an audio overview or podcast out of it.

I can use Create to turn a PowerPoint into a new explainer video or generate an image. And when it comes to agents, we have a couple of special agents, like Researcher.

This has been the biggest game-changer for me, since it synthesizes across the web and enterprise sources, applying deep chain-of-thought reasoning to any topic or project.

Analyst turns raw data from source files (into insights): I can upload a bunch of Excel files. It will glean insights, it will make forecasts, it will do visualizations. At the end of the day, all of these agents are at your fingertips.

We are at the point where we are going to put this within everyone’s reach.

We’re introducing a new class of enterprise agents that can be tuned on your company’s data, workflows and the voice of your business. We call it Copilot Tuning.

It’s a matter of tuning Copilot for the customer, for the company. Copilot can now learn the unique tone and language of your business. And soon it will go further, understanding the company-specific expertise and knowledge. All you have to do is seed the training environment with a small set of reference material and kick off a training run.

The custom model inherits all of the source permissions, and once incorporated into an agent, it can be deployed to authorized users.

If you’re a law firm, you can tune (a model) on your past arguments and the applicable literature, and it will provide answers and generate documents that are very specific to your business. Or if you’re a consulting firm serving, say, several vertical industries, you can now start tuning these models for a vertical industry to reflect the specific knowledge you have about that industry’s workflows.

It’s about taking what you have as a company and amplifying it even further.

Now you can think about these agents and multi-agent orchestrations of workflows, an agentic web for the business process.

As models evolve faster and become more capable, with new models dropping every couple of months, applications will have to evolve with them, and into scenarios that are multimodal and multi-agent. That’s the big shift now. It’s not one model with a single API call for request-response. We’re building real multimodal, stateful, multi-agent applications. And they have to be production-ready.

That’s the motivation for building a first-class app server. Think of Foundry as a production line for intelligence. You need more than a model to build these agents and applications. It’s the system around the model, whether that’s evals, the orchestration layer or RAG (retrieval-augmented generation), really everything.

Foundry is that complete app platform for the AI age. More than 70,000 organizations already use it across industries.

Companies are going from POCs (proofs of concept) to enterprise-scale deployments to unlock AI’s ROI (return on investment). In the past three months alone, we processed more than 100 trillion tokens, up five times year over year.

We already host 1,900 models, whether they’re response models, reasoning models, task-specific, multimodal, you name it.

Choosing a model can be a chore. You have to route your prompts to the right one quickly. We’re making that easier, too. Our new model router will automatically pick the best OpenAI model for the task. No more of those manual model choices.
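
For a sense of what that looks like from code, here is a minimal sketch of calling a router-style deployment from Python. It assumes the router is exposed as a Foundry deployment named model-router behind the standard Azure OpenAI chat-completions API; the endpoint, key and API version below are illustrative placeholders, not documented values.

```python
# A minimal sketch of calling Foundry's model router, assuming a deployment
# named "model-router" behind the standard Azure OpenAI chat-completions API.
# Endpoint, key and API version are placeholders.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://YOUR-RESOURCE.openai.azure.com",  # placeholder
    api_key="YOUR-API-KEY",                                   # placeholder
    api_version="2024-12-01-preview",                         # assumed version
)

# The router picks an underlying OpenAI model per request, so the caller
# never hard-codes a specific model choice.
response = client.chat.completions.create(
    model="model-router",  # assumed router deployment name
    messages=[{"role": "user", "content": "Summarize our Q3 incident log."}],
)

# The routed model that actually served the request is reported back.
print(response.model)
print(response.choices[0].message.content)
```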

You can provision throughput once in Foundry and use that provisioned throughput across models, including Grok (from Elon Musk’s xAI).

Now you have Mistral, which you can even leverage as a full sovereign deployment in the EU (European Union) region, which is increasingly a big consideration for people building applications around the world. I think, more and more, there will be models that people prefer in different regions.

The Foundry Agent Service lets you build declarative agents, in fact, with just a few lines of code or right in the portal. For complex workflows, it supports multi-agent orchestration. You can essentially consume Agent Service as a managed service. More than 10,000 organizations already use it.
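
As a rough illustration of the “few lines of code” claim, here is a sketch of a declarative agent using the azure-ai-projects preview SDK. The connection string, model deployment and instructions are placeholders, and the exact method and parameter names have shifted across preview versions, so treat them as assumptions rather than a definitive recipe.

```python
# A minimal sketch of a declarative agent on Foundry's Agent Service using
# the azure-ai-projects preview SDK; names below are placeholders.
from azure.ai.projects import AIProjectClient
from azure.identity import DefaultAzureCredential

project = AIProjectClient.from_connection_string(
    conn_str="YOUR-PROJECT-CONNECTION-STRING",  # placeholder
    credential=DefaultAzureCredential(),
)

# Declare the agent: a model plus instructions, no orchestration code.
agent = project.agents.create_agent(
    model="gpt-4o",  # any model deployed in the project
    name="support-triage-agent",
    instructions="Triage incoming support tickets and label their severity.",
)

# Run the agent against a new conversation thread.
thread = project.agents.create_thread()
project.agents.create_message(thread_id=thread.id, role="user",
                              content="Checkout fails with a 500 error.")
run = project.agents.create_and_process_run(thread_id=thread.id,
                                            agent_id=agent.id)  # name varies by SDK version
print(run.status)
```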

We provide a full spectrum of compute so you can hit the right price-performance for each of your agent scenarios.

We make it simple, for example, to attach Foundry to your application’s endpoints or containers and to deploy any open-source model on AKS (Azure Kubernetes Service), whether in the cloud or in hybrid mode with Arc. Increasingly, you need models deployed at the edge. Foundry will do that. And we’re closing the loop between Foundry and Copilot Studio.

Now you can take a model . . . post-train it in Foundry, then drop it into Copilot Studio so you can use that post-trained model to automate a workflow or build an agent.

The other vital consideration for an app server is observability. That’s why we’re bringing new observability features to Foundry to monitor and manage AI in production. You can track impact, quality, safety and cost all in one place.

Going forward, we believe every organization is going to have people and agents working together. That means the systems you use ubiquitously today, for things like identity, endpoint management and security, now extend to agents. This is a big deal. You need the same rails that you use today at massive scale for work between people and agents.

With Entra ID, agents now get their own identity, authorization, policies and access controls. Agents built in Foundry and Copilot Studio automatically show up in an agent directory in Entra. We’re also partnering with ServiceNow and Workday to enable automated provisioning and governance of their agents.

On data governance, (Microsoft) Purview now extends to Foundry. When an agent writes a completion, it can ensure end-to-end data protection. Another big security consideration.

On the security side, Defender now extends to Foundry. That means your agents are treated as endpoints, protected against threats like wallet abuse or credential theft.

(Foundry Local) includes a fast, high-performance runtime, models, agents as a service, and a CLI (command-line interface) for local app development. And yes, it’s fully supported on Windows and Mac.
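
Here is a minimal sketch of how a local app might call a Foundry Local model, assuming the runtime exposes an OpenAI-compatible endpoint on localhost, as many local runtimes do; the port, key handling and model alias are illustrative assumptions, not documented values.

```python
# A minimal sketch of talking to a locally served model from Python, assuming
# an OpenAI-compatible endpoint on localhost. Port and alias are placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:5273/v1",  # assumed local endpoint
    api_key="not-needed-locally",         # local runtimes typically ignore keys
)

response = client.chat.completions.create(
    model="phi-3.5-mini",  # placeholder alias for a locally cached model
    messages=[{"role": "user", "content": "Explain MCP in one sentence."}],
)
print(response.choices[0].message.content)
```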

If you’re a developer, Windows is the most open platform at the largest scale, with more than a billion users and a billion devices you can reach.

Over the past year, we have seen developers such as Adobe use the AI capabilities in Windows to ship applications.

Windows AI Foundry is what we use, in fact, to build features on Copilot+ PCs for things like Recall or even Click To Do. All of those are built using the same runtime and SDK. Now we’re making that platform available for the full development lifecycle. Not just on Copilot+ PCs, but across CPUs, GPUs, NPUs and in the cloud, so you can build your application once and run it across all that silicon.

Foundry Local is built into Windows AI Foundry, so you can use a rich catalog of pre-optimized open-source models that run right on your device.

With Windows AI Foundry, you can customize . . . our built-in Phi Silica SLM (small language model) with LoRA (low-rank adaptation) adapters to meet basically any specific need of your application.
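
Windows AI Foundry’s own tuning flow isn’t shown here; as a generic illustration of the low-rank-adapter idea, here is a sketch using Hugging Face’s peft library, with an illustrative model name and assumed module names.

```python
# A generic sketch of LoRA fine-tuning with Hugging Face's peft library,
# illustrating the low-rank-adapter idea, not Windows AI Foundry's tooling.
# The model name and hyperparameters are illustrative.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("microsoft/Phi-3.5-mini-instruct")

# LoRA freezes the base weights and trains small low-rank matrices added to
# selected projection layers, so the adapter stays tiny.
config = LoraConfig(
    r=8,                                  # rank of the adapter matrices
    lora_alpha=16,                        # scaling factor
    target_modules=["q_proj", "v_proj"],  # attention projections (assumed names)
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, config)
model.print_trainable_parameters()  # typically well under 1% of the base model
```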

Just as (Microsoft-backed OpenAI’s) o1 and DeepSeek marked the beginning of, fundamentally, inference or test-time compute in the cloud, I believe Phi Silica will absolutely revolutionize what inference compute looks like on the PC. All of you, as developers, are going to use this to build incredible experiences.

Windows will come with several built-in MCP servers, for things such as the file system, settings and app actions.

We’re adding an on-device MCP registry that lets MCP-compatible clients discover secure MCP servers that we have vetted for security and performance, while keeping you in control.
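
To make the shape of such a server concrete, here is a minimal sketch of an MCP server in Python using the official MCP SDK’s FastMCP helper; the server name and tool are made-up examples of the kind of thing a registry-aware client could discover and call.

```python
# A minimal sketch of an MCP server using the official MCP Python SDK.
# The server name and tool are made-up examples.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-notes")  # server name shown to clients

@mcp.tool()
def count_words(text: str) -> int:
    """Count the words in a piece of text."""
    return len(text.split())

if __name__ == "__main__":
    # Serves over stdio by default, the transport local MCP clients expect.
    mcp.run()
```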

(NLWeb) democratizes the creation of intelligence for any application, any website. And it democratizes the aggregation of that intelligence for any developer.

Give me a reasoning model on top of NLWeb, and I can give it a goal and have the reasoning model start composing, going out and synthesizing across this distributed intelligence.
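
As a rough sketch of what consuming that distributed intelligence could look like, here is a Python snippet that queries an NLWeb-enabled site in natural language. It assumes the site exposes NLWeb’s natural-language ask endpoint; the URL, parameter name and response shape are illustrative placeholders based on the project’s design, not documented guarantees.

```python
# A rough sketch of querying an NLWeb-enabled site in natural language.
# URL, parameter and response shape are illustrative placeholders.
import requests

resp = requests.get(
    "https://example.com/ask",  # placeholder NLWeb endpoint
    params={"query": "vegetarian recipes under 30 minutes"},
    timeout=30,
)
resp.raise_for_status()

# NLWeb responses are built on Schema.org-style structured items, which is
# what makes them easy for agents to consume.
for item in resp.json().get("results", []):
    print(item.get("name"), "-", item.get("url"))
```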

What is search, what is an app: all of those things change completely in terms of how they are built. This is a platform that we need to build together. I think something big will come out of that.

The past 10, 15 years, as we know, have been about the power of the aggregator. I think something big is going to change.

We’re integrating (Azure) Cosmos DB directly into Foundry. Any agent can store things like its conversation history. Soon they’ll be able to use Cosmos for all of their app’s data needs.
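
Here is a minimal sketch of that storage pattern with the azure-cosmos Python SDK; the account URL, database, container, partition key and document shape are illustrative placeholders, not anything prescribed by Foundry.

```python
# A minimal sketch of an agent persisting conversation history in Azure
# Cosmos DB; account, database, container and document shape are placeholders.
from azure.cosmos import CosmosClient

client = CosmosClient("https://YOUR-ACCOUNT.documents.azure.com", "YOUR-KEY")
container = client.get_database_client("agent-db").get_container_client("chats")

# Each turn is stored as one document, partitioned by session so an agent
# can cheaply reload a whole conversation.
container.upsert_item({
    "id": "session-42-turn-3",
    "sessionId": "session-42",  # assumed partition key
    "role": "user",
    "content": "Open a ticket for the checkout outage.",
})

history = container.query_items(
    query="SELECT * FROM c WHERE c.sessionId = 'session-42'",
    partition_key="session-42",
)
for turn in history:
    print(turn["role"], ":", turn["content"])
```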

Now a query inside PostgreSQL can have LLM responses built in. You can have natural language and SQL together.
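
To illustrate the idea of mixing SQL and a model call in one query, here is a sketch against Azure Database for PostgreSQL, where the azure_ai extension exposes model calls as SQL functions. The generate() function name and deployment name used below are hypothetical stand-ins, not documented signatures, and the connection string and table are placeholders.

```python
# A sketch of an LLM call inside a SQL query. The azure_ai.generate()
# function and deployment name are hypothetical stand-ins.
import psycopg

SQL = """
SELECT ticket_id,
       azure_ai.generate('gpt-4o-deployment',      -- hypothetical function/deployment
                         'Summarize: ' || body) AS summary
FROM support_tickets
WHERE created_at > now() - interval '1 day';
"""

with psycopg.connect("postgresql://user:pass@host/db") as conn:  # placeholder DSN
    for ticket_id, summary in conn.execute(SQL):
        print(ticket_id, summary)
```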

(Fabric) is at the center of our data and analytics stack. Fabric brings together all your data and all your workloads in one unified experience. Last fall, we put SQL into Fabric.

We’re also bringing Cosmos DB into Fabric. AI applications need more than structured data. They need semi-structured data: text, images, audio.

With Cosmos in Fabric, your data will be right there alongside SQL, so you can unify your entire data estate and make it AI-ready.

We’re even building our digital twin builder into Fabric. You can now create digital twins with no code or low code.

You can map the data from your physical assets and systems at lightning speed.

(Copilot in Power BI lets you) chat with all your data. You can ask questions of the data, explore it visually, and analyze it across multiple Power BI reports and semantic models.

This agent will be available in Microsoft 365 (Copilot). The power of all the work you’ve done, building semantic models, building those BI dashboards, and now being able to put reasoning models on top with a chat interface: think about what a game-changer that will be.

As a developer, you face this classic optimization challenge between delivering the best AI in terms of capability and latency, and then, of course, having to deliver it at ever-falling cost.

That’s why we take a systems approach across the industry to optimize the full stack, whether it’s the data center, the silicon, the system software or the app server: composing a system and optimizing it with the power of software.

Our goal is to offer the lowest-cost, highest-scale infrastructure for the next generation of AI workloads. It all comes down to delivering the most tokens per watt per dollar. That’s kind of the ultimate equation for us.

The largest GB200-based supercomputer will be on Azure. We are incredibly excited to scale this and make it available to all of you as developers.

We now have more than 70 data center regions, more than any other provider. In the past three months alone, we opened 10 data centers across countries and continents.

We’re building a complete AI system. That includes cooling that meets the demands of AI workloads. With Maia, we designed and brought online these liquid-cooling “sidekick” units, which also support GB200, in a closed loop so they consume zero water.

On the network side, our newest data centers built for AI have more fiber optics than we added across all of Azure before last year.

We connect our DCs (data centers) with this 400-terabit backbone.

Whether it’s training or inference needs, when distributed, the AI WAN connects it all.

Last October, we introduced Cobalt, our Arm-based virtual machines. They already power a large portion of our own workloads at massive scale. Teams runs on it. Defender runs on it.
