
Last year, we announced the Google Gen AI SDK as the new unified library for Gemini on Google AI (via the Gemini Developer API) and Vertex AI (via the Vertex AI API). At the time, it was only a Python SDK. Since then, the team has been busy adding support for Go, Node.js, and Java, but my favorite language, C#, was missing until now.

Today, I’m happy to announce that we now have a Google Gen AI .NET SDK! This SDK enables C#/.NET developers to use Gemini on Google AI or Vertex AI with a single unified library.

Let’s take a look at the details. 

Installation

To install the library, run the following command in your .NET project directory:

```shell
dotnet add package Google.GenAI
```

Import it to your code as follows:

```csharp
using Google.GenAI;
```

Create a client

First, you need to create a client to talk to Gemini. 

You can target Gemini on Google AI (via the Gemini Developer API):

```csharp
using Google.GenAI;

// Gemini Developer API
var client = new Client(apiKey: apiKey);
```

Or you can target Gemini on Vertex AI (via the Vertex AI API):

```csharp
// Vertex AI API
var client = new Client(
    project: project, location: location, vertexAI: true
);
```
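Since the same client type serves both backends, one convenient pattern is to pick the backend at startup from the environment. A minimal sketch — the environment variable names here are my own choice for illustration, not an SDK requirement:

```csharp
using System;
using Google.GenAI;

// Use the Gemini Developer API when an API key is present,
// otherwise fall back to Vertex AI with project/location settings.
// GEMINI_API_KEY, GOOGLE_CLOUD_PROJECT, and GOOGLE_CLOUD_LOCATION
// are assumed variable names for this sketch.
var apiKey = Environment.GetEnvironmentVariable("GEMINI_API_KEY");

var client = apiKey is not null
    ? new Client(apiKey: apiKey)  // Gemini Developer API
    : new Client(
        project: Environment.GetEnvironmentVariable("GOOGLE_CLOUD_PROJECT"),
        location: Environment.GetEnvironmentVariable("GOOGLE_CLOUD_LOCATION"),
        vertexAI: true);          // Vertex AI API
```

Everything after this point is identical regardless of which backend the client targets.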

Generate text

Once you have the client, you can generate text with a unary response:

```csharp
var response = await client.Models.GenerateContentAsync(
    model: "gemini-2.0-flash", contents: "why is the sky blue?"
);
Console.WriteLine(response.Candidates[0].Content.Parts[0].Text);
```

You can also generate text with a streaming response:

```csharp
await foreach (var chunk in client.Models.GenerateContentStreamAsync(
    model: "gemini-2.0-flash",
    contents: "why is the sky blue?"
)) {
    Console.WriteLine(chunk.Candidates[0].Content.Parts[0].Text);
}
```
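When streaming, you often want the complete text at the end as well as the incremental chunks. A small sketch that accumulates the streamed fragments in a `StringBuilder`, assuming the same client and model as above:

```csharp
using System.Text;

var sb = new StringBuilder();
await foreach (var chunk in client.Models.GenerateContentStreamAsync(
    model: "gemini-2.0-flash",
    contents: "why is the sky blue?"
)) {
    // Append each streamed fragment as it arrives
    sb.Append(chunk.Candidates[0].Content.Parts[0].Text);
}

// The full response, assembled from all chunks
Console.WriteLine(sb.ToString());
```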

Generate image

Generating images is also straightforward with the new library:

```csharp
var response = await client.Models.GenerateImagesAsync(
    model: "imagen-3.0-generate-002",
    prompt: "Red skateboard"
);

// Save the image to a file
var image = response.GeneratedImages.First().Image;
await File.WriteAllBytesAsync("skateboard.jpg", image.ImageBytes);
```

[Generated image: a red skateboard]

Configuration

Of course, all of the text and image generation is highly configurable. 

For example, you can define a response schema and a generation configuration with system instructions and other settings for text generation as follows:

```csharp
// Define a response schema
Schema countryInfo = new()
{
    Properties = new Dictionary<string, Schema> {
        { "name", new Schema { Type = Type.STRING, Title = "Name" } },
        { "population", new Schema { Type = Type.INTEGER, Title = "Population" } },
        { "capital", new Schema { Type = Type.STRING, Title = "Capital" } },
        { "language", new Schema { Type = Type.STRING, Title = "Language" } }
    },
    PropertyOrdering = ["name", "population", "capital", "language"],
    Required = ["name", "population", "capital", "language"],
    Title = "CountryInfo",
    Type = Type.OBJECT
};

// Define a generation config
GenerateContentConfig config = new()
{
    ResponseSchema = countryInfo,
    ResponseMimeType = "application/json",
    SystemInstruction = new Content
    {
        Parts = [
            new Part { Text = "Only answer questions on countries. For everything else, say I don't know." }
        ]
    },
    MaxOutputTokens = 1024,
    Temperature = 0.1,
    TopP = 0.8,
    TopK = 40,
};

var response = await client.Models.GenerateContentAsync(
    model: "gemini-2.0-flash",
    contents: "Give me information about Cyprus",
    config: config);
```
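Because the response is constrained to that schema and returned as JSON, you can deserialize it straight into a matching C# type. A sketch using `System.Text.Json` — the `CountryInfo` record here is my own illustration mirroring the schema, not an SDK type:

```csharp
using System.Text.Json;

// The model's JSON output, constrained by the response schema above
var json = response.Candidates[0].Content.Parts[0].Text;

var options = new JsonSerializerOptions { PropertyNameCaseInsensitive = true };
var country = JsonSerializer.Deserialize<CountryInfo>(json, options);

Console.WriteLine($"{country!.Capital} is the capital of {country.Name}");

// Mirrors the response schema; in a top-level-statement program,
// type declarations go after the statements.
public record CountryInfo(string Name, long Population, string Capital, string Language);
```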

Similarly, you can specify image generation configuration:

```csharp
GenerateImagesConfig config = new()
{
    NumberOfImages = 1,
    AspectRatio = "1:1",
    SafetyFilterLevel = SafetyFilterLevel.BLOCK_LOW_AND_ABOVE,
    PersonGeneration = PersonGeneration.DONT_ALLOW,
    IncludeSafetyAttributes = true,
    IncludeRaiReason = true,
    OutputMimeType = "image/jpeg",
};

var response = await client.Models.GenerateImagesAsync(
    model: "imagen-3.0-generate-002",
    prompt: "Red skateboard",
    config: config
);
```

Conclusion

In this blog post, we introduced the Google Gen AI .NET SDK, the new unified library for talking to Gemini from C#/.NET on both Google AI and Vertex AI.

Here are some links to learn more: