Popular Posts

June 29, 2024

What is the difference between HttpGet and HttpPost in ASP.NET MVC

 

In ASP.NET MVC, HttpGet and HttpPost are attributes used to specify which HTTP method a particular action method will respond to. Here are the main differences between the two:

HttpGet

  1. Purpose: Used to retrieve data from the server.
  2. Idempotent: Typically, HttpGet requests are idempotent, meaning they do not change the server state.
  3. Usage: Commonly used for requests like fetching data, displaying web pages, etc.
  4. URL: Parameters are passed in the query string of the URL.
  5. Security: Less secure for transmitting sensitive data since data is visible in the URL.
[HttpGet]
public ActionResult Index()
{
    // Logic for handling GET request
    return View();
}

HttpPost

  1. Purpose: Used to submit data to the server.
  2. Non-idempotent: HttpPost requests can change the server state (e.g., creating or updating resources).
  3. Usage: Commonly used for form submissions, sending data to be processed, etc.
  4. URL: Parameters are passed in the request body, not in the URL.
  5. Security: Better suited for sensitive data since parameters are not visible in the URL or browser history (though HTTPS is still required to protect the body in transit).
[HttpPost]
public ActionResult SubmitForm(FormCollection form)
{
    // Logic for handling POST request
    return RedirectToAction("Index");
}


Key Points

  • HttpGet should be used for retrieving data without side effects.
  • HttpPost should be used when submitting data or making changes to the server's state.
  • Mixing the use of HttpGet and HttpPost inappropriately can lead to security issues or unintended side effects.
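The GET/POST distinction above exists at the HTTP level, independent of ASP.NET. As a quick language-neutral illustration, this Python sketch (standard library only; the example URLs are made up) shows that GET parameters travel in the URL while POST parameters travel in the request body:

```python
from urllib.parse import urlencode
from urllib.request import Request

params = urlencode({"name": "alice", "id": "42"})

# GET: parameters are appended to the URL's query string
get_req = Request("https://example.com/search?" + params)
print(get_req.get_method())   # GET
print(get_req.full_url)       # parameters visible in the URL

# POST: supplying a body makes urllib switch to POST; parameters stay out of the URL
post_req = Request("https://example.com/submit", data=params.encode())
print(post_req.get_method())  # POST
print(post_req.data)          # parameters carried in the body
```

No request is actually sent here; the snippet only constructs the two request shapes to show where the data ends up.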

Example Scenario

For a simple form:

  • Display the form (HttpGet):
[HttpGet]
public ActionResult Create()
{
    return View();
}

  • Handle form submission (HttpPost):
[HttpPost]
public ActionResult Create(MyModel model)
{
    if (ModelState.IsValid)
    {
        // Save data to the database
        return RedirectToAction("Index");
    }
    return View(model);
}

In this example, the form is displayed with a GET request, and the data submitted through the form is handled with a POST request.


Have you worked with Docker on ASP.NET Core projects

 

Yes, I have experience working with Docker on ASP.NET Core projects. Docker allows you to containerize your applications, making it easier to deploy and manage them consistently across different environments.

Here's a step-by-step guide to containerizing an ASP.NET Core application using Docker:

Step 1: Create an ASP.NET Core Application

First, create a new ASP.NET Core web application.

dotnet new webapi -n MyAspNetCoreApp
cd MyAspNetCoreApp

Step 2: Add a Dockerfile

Create a Dockerfile in the root of your project directory. This file contains the instructions for building the Docker image.

# Use the official ASP.NET Core runtime as a base image
FROM mcr.microsoft.com/dotnet/aspnet:6.0 AS base
WORKDIR /app
EXPOSE 80

# Use the official .NET SDK image to build the app
FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build
WORKDIR /src
COPY ["MyAspNetCoreApp.csproj", "./"]
RUN dotnet restore "./MyAspNetCoreApp.csproj"
COPY . .
RUN dotnet build "MyAspNetCoreApp.csproj" -c Release -o /app/build

FROM build AS publish
RUN dotnet publish "MyAspNetCoreApp.csproj" -c Release -o /app/publish

# Use the runtime image to run the app
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "MyAspNetCoreApp.dll"]


Step 3: Build the Docker Image

Build the Docker image using the docker build command. Make sure you run this command in the directory where your Dockerfile is located.


docker build -t myaspnetcoreapp .



Step 4: Run the Docker Container

Run a container using the Docker image you just built.


docker run -d -p 8080:80 --name myaspnetcoreapp_container myaspnetcoreapp


This command will run your container in detached mode (-d), map port 80 in the container to port 8080 on your host (-p 8080:80), and name the container myaspnetcoreapp_container.

Step 5: Access Your Application

Open a web browser and navigate to http://localhost:8080. You should see your ASP.NET Core web API running inside the Docker container.

Step 6: Docker Compose (Optional)

For more complex scenarios, you might want to use Docker Compose to manage multi-container applications. Here's an example docker-compose.yml file:

version: '3.4'

services:
  myaspnetcoreapp:
    image: myaspnetcoreapp
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "8080:80"

To use Docker Compose, run the following command:

docker-compose up

This will build the image (if it doesn't exist) and start the container as defined in the docker-compose.yml file.

Conclusion

Using Docker with ASP.NET Core allows you to package your application and its dependencies into a container, ensuring consistent behavior across different environments. This approach simplifies deployment and scaling, making it ideal for modern cloud-native applications.


What is robots.txt with examples (SEO)

 

robots.txt is a text file webmasters create to instruct web robots (typically search engine robots) how to crawl pages on their website. The robots.txt file is part of the Robots Exclusion Standard, which specifies how to inform participating crawlers about the access permissions for certain parts of a website.

Structure and Syntax

The robots.txt file resides at the root of a website (e.g., https://www.example.com/robots.txt) and follows a specific syntax:

  1. User-agent: Specifies the robot or group of robots to which the rules apply. For example:

    • User-agent: * applies rules to all robots.
    • User-agent: Googlebot applies rules specifically to Google's crawler.
  2. Disallow: Specifies the URLs that are not to be crawled. For example:

    • Disallow: /private/ disallows crawling of all URLs under the /private/ directory.
    • Disallow: /cgi-bin/ disallows crawling of all URLs in the /cgi-bin/ directory.
  3. Allow: Optionally, specifies exceptions to the disallow rule for a specific user-agent. For example:

    • Allow: /public/page.html allows crawling of a specific page even if it's in a disallowed directory.
  4. Crawl-delay: Specifies the delay (in seconds) that robots should wait between requests to the site. For example:

    • Crawl-delay: 10 suggests a 10-second delay between successive requests.
  5. Sitemap: Specifies the location of the XML Sitemap(s) for the site. For example:

    • Sitemap: https://www.example.com/sitemap.xml informs robots of the location of the XML Sitemap file.

Example robots.txt File

Here's an example of how a robots.txt file might look for a fictional website:

User-agent: *
Disallow: /private/
Disallow: /cgi-bin/
Allow: /public/page.html
Crawl-delay: 10

User-agent: Googlebot
Disallow: /admin/
Allow: /public/page.html


Explanation

  • User-agent: *: Applies rules to all robots (* is a wildcard).

  • Disallow: /private/: Prevents all robots from crawling URLs under the /private/ directory.

  • Disallow: /cgi-bin/: Prevents all robots from crawling URLs under the /cgi-bin/ directory.

  • Allow: /public/page.html: Explicitly permits all robots to crawl /public/page.html; Allow entries are used to carve out exceptions to broader Disallow rules.

  • Crawl-delay: 10: Suggests a 10-second delay between requests to the site for all robots.

  • User-agent: Googlebot: Applies rules specifically to Google's crawler.

  • Disallow: /admin/: Prevents Googlebot from crawling URLs under the /admin/ directory.

  • Allow: /public/page.html: Explicitly permits Googlebot to crawl /public/page.html.

Usage and Considerations

  • Location: Place the robots.txt file at the root of your website (e.g., https://www.example.com/robots.txt).
  • Syntax: Follow the exact syntax rules to ensure robots interpret your directives correctly.
  • Testing: Use tools like Google Search Console to test your robots.txt file to ensure it's correctly configured.
  • Sitemap: Include a Sitemap directive to help search engines discover your XML Sitemap(s).

Robots.txt is an essential tool for managing how search engines and other bots interact with your website, ensuring efficient crawling and indexing while protecting sensitive content.
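To sanity-check how a compliant crawler will interpret such rules, Python's standard library ships a parser for the Robots Exclusion Standard; the sketch below feeds it the general-purpose portion of the example file from above ("MyBot" is an illustrative user agent):

```python
from urllib.robotparser import RobotFileParser

rules = """\
User-agent: *
Disallow: /private/
Disallow: /cgi-bin/
Allow: /public/page.html
Crawl-delay: 10
"""

rp = RobotFileParser()
rp.parse(rules.splitlines())

# A compliant bot must skip disallowed paths...
print(rp.can_fetch("MyBot", "https://www.example.com/private/data.html"))  # False
# ...but may fetch allowed ones, and should honor the crawl delay
print(rp.can_fetch("MyBot", "https://www.example.com/public/page.html"))   # True
print(rp.crawl_delay("MyBot"))  # 10
```

This is handy for testing a robots.txt file locally before deploying it, in addition to tools like Google Search Console.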


June 27, 2024

SQL Server query to find duplicate values in a given column

 

To find duplicates in the name column from the TempTable1 table, you can use a SQL query that employs the COUNT() function along with GROUP BY and HAVING clauses. 


Here’s how you can do it:


SELECT name, COUNT(*) AS name_count
FROM [TempTable1]
GROUP BY name
HAVING COUNT(*) > 1;

Explanation:

  1. SELECT statement:

    • SELECT name, COUNT(*) AS name_count: This selects the name column and counts how many times each name appears in the table. The COUNT(*) function counts all rows for each name.
  2. GROUP BY clause:

    • GROUP BY name: Groups the result set by the name column. This means that the COUNT(*) function will count occurrences of each unique name.
  3. HAVING clause:

    • HAVING COUNT(*) > 1: Filters the groups to only include those where the count of name occurrences is greater than 1. This effectively filters out unique names and shows only those that appear more than once, indicating duplicates.


Result:

The query will return rows where the name column has duplicate values, along with the count of how many times each name appears. This allows you to identify and manage duplicates in your TempTable1 table.
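The same GROUP BY/HAVING pattern works in any SQL engine. As a quick self-contained check, here it is run against an in-memory SQLite table (the sample names are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE TempTable1 (name TEXT)")
conn.executemany(
    "INSERT INTO TempTable1 (name) VALUES (?)",
    [("alice",), ("bob",), ("alice",), ("carol",), ("bob",), ("alice",)],
)

# Same duplicate-finding query as above (ORDER BY added for a stable result)
rows = conn.execute(
    """
    SELECT name, COUNT(*) AS name_count
    FROM TempTable1
    GROUP BY name
    HAVING COUNT(*) > 1
    ORDER BY name
    """
).fetchall()

print(rows)  # [('alice', 3), ('bob', 2)]
```

Note that 'carol' appears only once, so the HAVING clause filters it out of the result.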


What’s the difference between middleware and a filter in ASP.NET Core

 

 In ASP.NET Core, middleware and filters are both mechanisms that enable you to add cross-cutting concerns to the request processing pipeline, but they serve different purposes and operate at different levels within the application.

Middleware

Middleware is a component that sits between the client and the server and is used to handle requests and responses. It operates in a pipeline and can perform actions like request processing, logging, authentication, routing, etc. Middleware is executed in the order it is added to the pipeline.

Key points about middleware:

  • Middleware is added to the ASP.NET Core pipeline in Startup.cs using the Use methods (e.g., app.UseAuthentication(), app.UseRouting()).
  • Middleware can handle all requests or be specific to certain paths or conditions.
  • Middleware can modify the request or response and pass control to the next middleware or terminate the request pipeline early.
  • Examples include authentication middleware (UseAuthentication()), logging middleware, routing middleware, etc.
  • Middleware runs for every request that matches its criteria.
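Conceptually, the pipeline described above is a chain of components, each of which receives the request plus a "next" delegate and can act before and after calling it, or short-circuit the pipeline entirely. A language-neutral sketch in Python (illustrative names only, not ASP.NET APIs):

```python
# Each middleware takes (request, next_fn) and returns a response.
def logging_middleware(request, next_fn):
    print(f"-> {request}")       # work before the rest of the pipeline
    response = next_fn(request)  # pass control to the next component
    print(f"<- {response}")      # work after the response comes back
    return response

def auth_middleware(request, next_fn):
    if "token" not in request:
        return "401 Unauthorized"  # short-circuit: later components never run
    return next_fn(request)

def endpoint(request):
    return "200 OK"

def build_pipeline(middlewares, terminal):
    # Compose components in order, mirroring app.Use...() registration order
    handler = terminal
    for mw in reversed(middlewares):
        handler = (lambda m, nxt: (lambda req: m(req, nxt)))(mw, handler)
    return handler

app = build_pipeline([logging_middleware, auth_middleware], endpoint)
print(app("GET / token"))  # passes auth and reaches the endpoint
print(app("GET /"))        # short-circuited by auth_middleware
```

The composition order matters, just as the order of Use calls matters in Startup.cs: logging wraps auth, so it observes both outcomes.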

Filters

Filters are attributes or classes that can be applied to controller actions or to all actions in a controller, and they run before or after an action method executes. Filters allow you to implement cross-cutting concerns like logging, authorization, exception handling, etc., specifically targeted at actions or controllers.

Key points about filters:

  • Filters are attributes or classes marked with specific interfaces (IFilterMetadata or its derivatives like IActionFilter, IAsyncActionFilter, etc.) that execute code before or after an action method runs.
  • Filters are applied using attributes ([Authorize], [ServiceFilter], [TypeFilter], etc.) or added globally in Startup.cs.
  • They are used to add behaviors to action methods, such as authorization checks ([Authorize]), handling exceptions (exception filters), caching results ([ResponseCache]), etc.
  • Filters can be applied at different levels: globally to all actions, to all actions in a controller, or to specific actions.
  • Filters can short-circuit the action execution or modify the arguments and result of the action.


Differences Summarized

  1. Scope of Application:

    • Middleware operates at a lower level, handling requests and responses in a pipeline across the entire application or based on path criteria.
    • Filters apply to specific action methods or controllers, allowing you to add behavior before or after the action method executes.
  2. Execution Order:

    • Middleware executes in the order it is added to the pipeline and can handle all requests that match its criteria.
    • Filters execute before or after an action method executes and can modify the arguments and result of the action.
  3. Purpose:

    • Middleware primarily handles request processing, logging, routing, etc., at a lower level.
    • Filters are used for applying cross-cutting concerns specifically to action methods or controllers, such as authorization, caching, validation, etc.

In practice, both middleware and filters are powerful tools in ASP.NET Core for adding functionalities that are independent of the core logic of your application, helping to keep concerns separated and promoting reusability and maintainability.


How does ASP.NET Core handle concurrency and parallelism

 

 ASP.NET Core provides several mechanisms to handle concurrency and parallelism effectively, especially in the context of web applications where multiple requests may arrive simultaneously:

  1. Thread Safety in Controllers and Services:

    • Controllers and services in ASP.NET Core are typically designed to be stateless and thread-safe by default. This means that multiple threads can execute controller actions or service methods concurrently without causing conflicts or unexpected behavior due to shared state.
  2. Async/Await Pattern:

    • ASP.NET Core encourages the use of asynchronous programming with async and await. This allows requests to be handled concurrently without blocking threads. When an asynchronous operation is awaited, the underlying thread is returned to the thread pool, enabling it to serve other requests in the meantime.
  3. Dependency Injection (DI):

    • ASP.NET Core's built-in dependency injection system ensures that services are scoped, transient, or singleton, depending on their lifecycle needs. This helps manage concurrency by ensuring that services are instantiated appropriately and safely shared or isolated as needed.
  4. Concurrency in Entity Framework Core:

    • When using Entity Framework Core (EF Core) or other ORMs, ASP.NET Core helps manage concurrency issues such as optimistic concurrency control. This allows multiple users to access and modify the same data concurrently, with mechanisms to detect and resolve conflicts.
  5. Parallelism for CPU-bound Tasks:

    • ASP.NET Core applications can utilize parallel programming techniques for CPU-bound tasks using constructs such as Parallel.ForEach or Task.WhenAll. This is particularly useful for scenarios where computations can be parallelized to take advantage of multi-core processors.
  6. Thread Pool Management:

    • ASP.NET Core efficiently manages threads using the .NET thread pool. This ensures that threads are reused and not blocked unnecessarily, improving overall application scalability under load.
  7. Middleware and Request Pipelining:

    • ASP.NET Core middleware allows you to customize the request processing pipeline. Middleware components run sequentially for each request, but the pipeline as a whole serves many requests concurrently, covering concerns such as authentication, logging, and application-specific logic.
  8. SignalR for Real-Time Web Applications:

    • For real-time web applications requiring high concurrency, ASP.NET Core provides SignalR. SignalR supports WebSocket-based communication, enabling bi-directional communication between the client and server efficiently, with support for scaling out to multiple servers.

In summary, ASP.NET Core is designed with concurrency and parallelism in mind, leveraging asynchronous programming, dependency injection, and efficient thread management to ensure that web applications can handle multiple concurrent requests efficiently and safely.



Here are some syntax examples demonstrating how ASP.NET Core handles concurrency and parallelism:

1. Async/Await Pattern


public async Task<IActionResult> GetAsync(int id)
{
    var item = await _repository.GetAsync(id); // Async method call
    return Ok(item);
}

In this example:

  • async keyword allows the method to use await.
  • await _repository.GetAsync(id) suspends the method until GetAsync(id) completes, freeing the thread to serve other requests in the meantime.

2. Dependency Injection (DI)


public class ProductService
{
    private readonly ApplicationDbContext _context;

    public ProductService(ApplicationDbContext context)
    {
        _context = context;
    }

    public async Task<IEnumerable<Product>> GetAllProductsAsync()
    {
        return await _context.Products.ToListAsync();
    }
}




Here:

  • ApplicationDbContext is injected into the ProductService.
  • The service method GetAllProductsAsync() can be called concurrently from multiple requests safely because, when the service and its DbContext are registered with a scoped lifetime, each request gets its own instances.

3. Parallelism with Parallel.ForEach


public IActionResult ProcessItemsInParallel()
{
    var items = _repository.GetAllItems();
    Parallel.ForEach(items, item =>
    {
        // Process each item in parallel
        item.Process();
    });
    return Ok();
}

In this example:

  • Parallel.ForEach distributes the work of processing items across multiple threads.
  • Each item.Process() call executes concurrently, utilizing available CPU cores. Note that Parallel.ForEach is intended for CPU-bound work; shared resources such as an EF Core DbContext are not thread-safe and should not be used inside the parallel body.

4. Entity Framework Core (Concurrency Control)


public async Task<IActionResult> UpdateItemAsync(int id, Item updatedItem)
{
    var existingItem = await _repository.GetItemAsync(id);
    if (existingItem == null)
    {
        return NotFound();
    }

    existingItem.Name = updatedItem.Name;
    existingItem.Price = updatedItem.Price;

    try
    {
        await _context.SaveChangesAsync(); // Save changes asynchronously
    }
    catch (DbUpdateConcurrencyException ex)
    {
        // Handle concurrency conflicts
        // For example, retry logic or informing the user about the conflict
        return Conflict();
    }

    return NoContent(); // Successful update
}


Here:

  • SaveChangesAsync() in Entity Framework Core handles concurrency by detecting changes and managing database operations asynchronously.
  • DbUpdateConcurrencyException is caught to handle cases where multiple users attempt to update the same entity concurrently.

These examples illustrate how ASP.NET Core leverages async/await, DI, parallel programming techniques, and concurrency control mechanisms to handle multiple requests concurrently and efficiently manage parallel operations within web applications.



What is Core CLR With Examples

 

The Core Common Language Runtime (Core CLR) is the virtual machine component of the .NET Core framework. It is responsible for executing .NET applications and includes essential services such as garbage collection, just-in-time (JIT) compilation, and type system support. The Core CLR is designed to be cross-platform, high-performance, and lightweight, making it suitable for cloud-based and server-side applications.

Key Features of Core CLR

  1. Cross-Platform: Runs on Windows, Linux, and macOS.
  2. High Performance: Optimized for performance-critical applications.
  3. Modular and Lightweight: Only includes the necessary components, reducing the application's footprint.
  4. Open Source: Developed and maintained by the .NET Foundation and the community.

Components of Core CLR

  1. Garbage Collector (GC): Manages memory allocation and deallocation, optimizing memory usage and performance.
  2. JIT Compiler: Converts Intermediate Language (IL) code into native machine code at runtime.
  3. Type System: Provides support for defining and working with types, ensuring type safety.
  4. Threading: Manages execution threads, providing concurrency and parallelism.
  5. Interoperability: Allows integration with native code and libraries.


Example: A Simple .NET Core Application

Let's create a simple .NET Core console application to demonstrate how the Core CLR works.

  1. Create a new .NET Core project:

    Open a terminal and run the following commands:

    dotnet new console -n HelloWorldApp
    cd HelloWorldApp

  2. Modify the Program.cs file:

    Open the Program.cs file and update it with the following code:

    using System;

    namespace HelloWorldApp
    {
        class Program
        {
            static void Main(string[] args)
            {
                Console.WriteLine("Hello, World!");

                // Example of garbage collection
                for (int i = 0; i < 10; i++)
                {
                    var obj = new MyClass();
                }

                // Example of threading
                var thread = new System.Threading.Thread(() =>
                {
                    Console.WriteLine("Hello from another thread!");
                });
                thread.Start();
                thread.Join();
            }
        }

        class MyClass
        {
            // Destructor to see when the object is garbage collected
            ~MyClass()
            {
                Console.WriteLine("MyClass object is being finalized.");
            }
        }
    }

  3. Run the application:

    In the terminal, run:

    dotnet run

You should see output similar to the following (the finalizer messages appear only when the garbage collector actually runs the finalizers, so their number and timing can vary):

Hello, World!
Hello from another thread!
MyClass object is being finalized.
...


This example demonstrates basic usage of Core CLR features such as garbage collection and threading.

Explanation

  • Garbage Collection: The loop creates several instances of MyClass, which are eventually garbage collected. The destructor (~MyClass) provides a message when an object is finalized.
  • Threading: A new thread is created and started, which executes a simple lambda expression that prints a message.

Advanced Features

Core CLR also supports advanced features such as:

  • Just-In-Time (JIT) Compilation: Converts IL code to native code at runtime, optimizing execution.
  • Ahead-Of-Time (AOT) Compilation: Available through .NET Native or CoreRT, allows compiling applications directly to native code before deployment.
  • Native Interoperability (P/Invoke): Allows calling native functions from .NET code, facilitating integration with existing C/C++ libraries.

Example: P/Invoke in .NET Core

Here's a simple example of calling a native function using P/Invoke:

  1. Create a native library:

    On Windows, you might create a simple C library (native_lib.c):

    #include <stdio.h>

    __declspec(dllexport) void HelloFromNative()
    {
        printf("Hello from native code!\n");
    }

    Compile this to a DLL using a C compiler.

  2. Modify the .NET Core application to use the native library:

    Update the Program.cs file:

    using System;
    using System.Runtime.InteropServices;

    namespace HelloWorldApp
    {
        class Program
        {
            // P/Invoke declaration
            [DllImport("native_lib.dll", CallingConvention = CallingConvention.Cdecl)]
            public static extern void HelloFromNative();

            static void Main(string[] args)
            {
                Console.WriteLine("Hello, World!");

                // Call the native function
                HelloFromNative();
            }
        }
    }


  3. Run the application:

    Ensure the native library (native_lib.dll) is in the output directory and run the application:

    dotnet run

    You should see the following output:

    Hello, World!
    Hello from native code!

Conclusion

Core CLR is a powerful and flexible runtime environment for .NET applications, offering cross-platform capabilities, high performance, and a modular architecture. By understanding its components and features, developers can create efficient and robust .NET applications.

Filters in ASP.NET Core with examples (tutorial)

 

Filters in ASP.NET Core are components that can be executed before or after certain stages in the request processing pipeline. They provide a way to encapsulate cross-cutting concerns such as logging, authentication, authorization, error handling, and more. There are several types of filters available in ASP.NET Core:

  1. Authorization Filters
  2. Resource Filters
  3. Action Filters
  4. Exception Filters
  5. Result Filters

Let's look at each type of filter with examples:

1. Authorization Filters

Authorization filters are executed first in the request processing pipeline. They are used to determine whether a user is authorized to access a resource.

public class CustomAuthorizationFilter : IAuthorizationFilter
{
    public void OnAuthorization(AuthorizationFilterContext context)
    {
        // Custom authorization logic
        if (!context.HttpContext.User.Identity.IsAuthenticated)
        {
            // Challenge unauthenticated users (ForbidResult is meant for
            // authenticated users who lack permission)
            context.Result = new ChallengeResult();
        }
    }
}

// Applying the filter globally
public void ConfigureServices(IServiceCollection services)
{
    services.AddControllersWithViews(options =>
    {
        options.Filters.Add<CustomAuthorizationFilter>();
    });
}


2. Resource Filters

Resource filters run after authorization filters and before model binding. They are useful for caching or modifying the result before action execution.

public class CustomResourceFilter : IResourceFilter
{
    public void OnResourceExecuting(ResourceExecutingContext context)
    {
        // Logic before the action executes
    }

    public void OnResourceExecuted(ResourceExecutedContext context)
    {
        // Logic after the action executes
    }
}

// Applying the filter to a specific controller
[ServiceFilter(typeof(CustomResourceFilter))]
public class HomeController : Controller
{
    public IActionResult Index()
    {
        return View();
    }
}



3. Action Filters

Action filters run before and after the execution of an action method. They are useful for running code before and after an action method is called.

public class CustomActionFilter : IActionFilter
{
    public void OnActionExecuting(ActionExecutingContext context)
    {
        // Logic before the action executes
    }

    public void OnActionExecuted(ActionExecutedContext context)
    {
        // Logic after the action executes
    }
}

// Applying the filter to a specific action
public class HomeController : Controller
{
    [ServiceFilter(typeof(CustomActionFilter))]
    public IActionResult Index()
    {
        return View();
    }
}


4. Exception Filters

Exception filters are executed when an exception is thrown during the processing of an action method. They are useful for error handling and logging.

public class CustomExceptionFilter : IExceptionFilter
{
    public void OnException(ExceptionContext context)
    {
        // Custom error handling logic
        context.Result = new JsonResult(new { error = context.Exception.Message });
        context.ExceptionHandled = true;
    }
}

// Applying the filter globally
public void ConfigureServices(IServiceCollection services)
{
    services.AddControllersWithViews(options =>
    {
        options.Filters.Add<CustomExceptionFilter>();
    });
}


5. Result Filters

Result filters run before and after the execution of an action result. They are useful for modifying the result before it is sent to the client.

public class CustomResultFilter : IResultFilter
{
    public void OnResultExecuting(ResultExecutingContext context)
    {
        // Logic before the result is executed
    }

    public void OnResultExecuted(ResultExecutedContext context)
    {
        // Logic after the result is executed
    }
}

// Applying the filter to a specific controller
[ServiceFilter(typeof(CustomResultFilter))]
public class HomeController : Controller
{
    public IActionResult Index()
    {
        return View();
    }
}


Applying Filters

Filters can be applied globally, to controllers, or to action methods. Here’s how to apply them:

  • Globally: In the ConfigureServices method of Startup.cs.
  • Controller Level: Using the [ServiceFilter] or [TypeFilter] attribute on the controller.
  • Action Method Level: Using the [ServiceFilter] or [TypeFilter] attribute on the action method.

Example of Applying a Filter Globally

public void ConfigureServices(IServiceCollection services)
{
    services.AddControllersWithViews(options =>
    {
        options.Filters.Add<CustomActionFilter>();
    });
}

Example of Applying a Filter to a Controller

[ServiceFilter(typeof(CustomActionFilter))]
public class HomeController : Controller
{
    public IActionResult Index()
    {
        return View();
    }
}

Example of Applying a Filter to an Action Method

public class HomeController : Controller
{
    [ServiceFilter(typeof(CustomActionFilter))]
    public IActionResult Index()
    {
        return View();
    }
}

Summary

Filters in ASP.NET Core provide a powerful way to implement cross-cutting concerns in a clean and maintainable way. By understanding and using the different types of filters appropriately, you can ensure that your application remains modular and easy to manage.


How do you implement caching in ASP.NET Core

 

 Caching in ASP.NET Core can significantly improve application performance by storing frequently accessed data in memory. ASP.NET Core provides several options for implementing caching, ranging from simple in-memory caching to distributed caching across multiple servers. Here’s how you can implement caching in ASP.NET Core:

1. In-Memory Caching

In-memory caching is suitable for scenarios where cached data is used within a single instance of the application. It's simple to set up and does not require additional services.

Enable and Use In-Memory Caching

First, add the caching services in Startup.cs:

public void ConfigureServices(IServiceCollection services)
{
    services.AddControllersWithViews();

    // Add in-memory caching
    services.AddMemoryCache();
}


Next, use the cache in your controller or service:

using Microsoft.Extensions.Caching.Memory;

public class MyController : Controller
{
    private readonly IMemoryCache _cache;

    public MyController(IMemoryCache memoryCache)
    {
        _cache = memoryCache;
    }

    public IActionResult Index()
    {
        string cachedData;
        if (!_cache.TryGetValue("CachedDataKey", out cachedData))
        {
            // Key not in cache, so get data and set cache entry
            cachedData = GetDataFromDataSource();
            _cache.Set("CachedDataKey", cachedData, TimeSpan.FromMinutes(10)); // Cache for 10 minutes
        }

        return View(cachedData);
    }

    private string GetDataFromDataSource()
    {
        // Simulate getting data from a data source
        return "Cached data from data source";
    }
}

2. Distributed Caching (using Redis, SQL Server, etc.)

Distributed caching is suitable for scenarios where cached data needs to be shared across multiple instances of the application or multiple servers. ASP.NET Core supports various distributed cache providers like Redis, SQL Server, and others.

Use Redis for Distributed Caching

To use Redis for caching, you need to install the Microsoft.Extensions.Caching.StackExchangeRedis package and configure it in Startup.cs.


public void ConfigureServices(IServiceCollection services)
{
    services.AddControllersWithViews();

    // Add Redis distributed caching
    services.AddStackExchangeRedisCache(options =>
    {
        options.Configuration = "localhost:6379"; // Replace with your Redis connection string
        options.InstanceName = "SampleInstance"; // Optional: Redis instance name
    });
}



Use Distributed Cache in Controller

using Microsoft.Extensions.Caching.Distributed;
using System.Text;

public class MyController : Controller
{
    private readonly IDistributedCache _cache;

    public MyController(IDistributedCache distributedCache)
    {
        _cache = distributedCache;
    }

    public async Task<IActionResult> Index()
    {
        string cachedData;
        byte[] cachedBytes = await _cache.GetAsync("CachedDataKey");

        if (cachedBytes == null)
        {
            // Key not in cache, so get data and set cache entry
            cachedData = GetDataFromDataSource();
            cachedBytes = Encoding.UTF8.GetBytes(cachedData);
            var options = new DistributedCacheEntryOptions
            {
                AbsoluteExpirationRelativeToNow = TimeSpan.FromMinutes(10) // Cache for 10 minutes
            };
            await _cache.SetAsync("CachedDataKey", cachedBytes, options);
        }
        else
        {
            cachedData = Encoding.UTF8.GetString(cachedBytes);
        }

        return View(cachedData);
    }

    private string GetDataFromDataSource()
    {
        // Simulate getting data from a data source
        return "Cached data from data source";
    }
}

3. Response Caching (for HTTP Responses)

ASP.NET Core also supports response caching to cache the output of HTTP responses, which is useful for caching entire pages or API responses.

Enable Response Caching


public void ConfigureServices(IServiceCollection services)
{
    services.AddControllersWithViews();

    // Enable response caching
    services.AddResponseCaching();
}
Note: the [ResponseCache] attribute only sets cache-related response headers; for the server itself to cache responses you must also add the middleware with app.UseResponseCaching(); in the Configure method.


Use Response Caching in Controller


[ResponseCache(Duration = 60)] // Cache for 60 seconds
public IActionResult Index()
{
    return View();
}


Summary

Implementing caching in ASP.NET Core involves configuring caching services (IMemoryCache, IDistributedCache, or ResponseCache) in Startup.cs and using them appropriately in your controllers or services. The choice between in-memory caching and distributed caching depends on your application’s scalability and performance requirements.

Perl Advanced Experienced Interview Questions Answers

 
1. Difference between the variables on which the chomp function works?

Scalar: It is denoted by $ symbol. Variable can be a number or a string.

Array: Denoted by @ symbol prefix. Arrays are indexed by numbers.

The namespaces for these variable types are different; for example, @add and $add are unrelated. Scalar variables live in one table of names (namespace) and hold one specific piece of information at a time (either a number or a string), while array variables live in another table of names.

2. How can we create Perl programs in UNIX, Windows NT, Macintosh and OS/2 ?

“Emacs” or “vi” can be used in UNIX and in Windows NT we can use “notepad”. In Macintosh we can use MacPerl’s text editor or any other text editor and in OS/2, e or epm can be used

3. Create a function that is only available inside the scope where it is defined ?

$pvt = Calculation(5, 5);
print("Result = $pvt\n");

sub Calculation {
    my ($fstVar, $secndVar) = @_;
    my $square = sub {
        return($_[0] ** 2);
    };
    return(&$square($fstVar) + &$square($secndVar));
}


Output: Result = 50

4. How can i display all array element in which each element will display on next line in Perl ?


@array_declared = ('ab', 'cd', 'ef', 'gh');
foreach (@array_declared) {
    print "$_\n";
}


5. Which feature of Perl provides code reusability ? Give any example of that feature.

Inheritance feature of Perl provides code reusability. In inheritance, the child class can use the methods and property of parent class


package Parent;

sub foo
{
    print("Inside Parent::foo\n");
}

package Child;
our @ISA = ('Parent');

package main;
Child->foo();   # inherited from Parent, prints "Inside Parent::foo"


6. In Perl we can show the warnings using some options in order to reduce or avoid the errors. What are that options?

- The -w command-line option: It will display the list of warning messages regarding the code.

– strict pragma: It forces the user to declare all variables before they can be used using the my() function.

– Using the built-in debugger: It allows the user to scroll through the entire program line by line.

7. Write the program to process a list of numbers.

The following program would ask the user to enter numbers when executed and the average of the numbers is shown as the output:


$sum = 0;
$count = 0;
print "Enter number: ";
$num = <>;
chomp($num);
while ($num >= 0)
{
    $count++;
    $sum += $num;
    print "Enter another number: ";
    $num = <>;
    chomp($num);
}
print "$count numbers were entered\n";
if ($count > 0)
{
    print "The average is ", $sum/$count, "\n";
}
exit(0);


8. Does Perl have objects? If yes, then does it force you to use objects? If no, then why?

Yes, Perl has objects and it doesn’t force you to use objects. Many object oriented modules can be used without understanding objects. But if the program is too large then it is efficient for the programmer to make it object oriented.

9. Can we load binary extension dynamically?

Yes, we can load binary extensions dynamically, provided your system supports it. If it doesn’t, you can statically compile the extension instead.

10. Write a program to concatenate the $firststring and $secondstring and result of these strings should be separated by a single space.

Syntax:

$result = $firststring . " " . $secondstring;

Program:

#!/usr/bin/perl
$firststring = "abcd";
$secondstring = "efgh";
$combine = "$firststring $secondstring";
print "$combine\n";


Output:

abcd efgh

11. How do I replace every TAB character in a file with a comma?

perl -pi.bak -e 's/\t/,/g' myfile.txt
12. In Perl, there are some arguments that are used frequently. What are those arguments and what do they mean?

-w (show warnings)

-d (run under the debugger)

-c (compile only, do not run)

-e (execute code supplied on the command line)

We can also use a combination of these, like:

-wd

13. How many types of primary data structures in Perl and what do they mean?

The scalar: It can hold one specific piece of information at a time (string, integer, or reference). It starts with a dollar sign $ followed by a Perl identifier, which can contain alphanumerics and underscores but is not allowed to start with a digit.

Arrays: Arrays are simply lists of scalar variables and begin with the @ sign. Example of an array:


my @arrayvar = ("string a", "string b", "string c");


Associative arrays: Also frequently called hashes, they are the third major data type in Perl after scalars and arrays. Hashes are named as such because they work very similarly to a common data structure that programmers use in other languages: hash tables. However, hashes in Perl are a directly language-supported data type.

14. Which functions in Perl allow you to include a module file or a module, and what is the difference between them?

“use”

1. The method is used only for the modules (only to include .pm type file)

2. The included objects are verified at the time of compilation.

3. We don’t need to specify the file extension.

4. Loads the module at compile time.

“require”

1. The method is used for both libraries and modules.

2. The included objects are verified at the run time.

3. We need to specify the file Extension.

4. Loads at run-time.

suppose we have a module file as “Module.pm”

use Module;

or

require “Module.pm”;

(will do the same)

15. How can you define “my” variables scope in Perl and how it is different from “local” variable scope?


$test = 2.3456;
{
    my $test = 3;
    print "In block, \$test = $test\n";
    print "In block, \$::test = $::test\n";
}
print "Outside the block, \$test = $test\n";
print "Outside the block, \$::test = $::test\n";


Output:

In block, $test = 3

In block, $::test = 2.3456

Outside the block, $test = 2.3456

Outside the block, $::test = 2.3456

The scope of “my” variable visibility is in the block only but if we declare one variable local then we can access that from the outside of the block also. ‘my’ creates a new variable, ‘local’ temporarily amends the value of a variable.

16. Which guidelines by Perl modules must be followed?

In Perl, the following guidelines must be followed by modules:

The file name of a module must be the same as the package name.

The name of the package should always begin with a capital letter.

The entire file name should have the extension ".pm".

In case no object-oriented technique is used, the package should be derived from the Exporter class.

Also, if no object-oriented techniques are used, the module should export its functions and variables to the main namespace using the @EXPORT and @EXPORT_OK arrays (the use directive is used to load the modules).

17. How the interpreter is used in Perl?

Every Perl program must be passed through the Perl interpreter in order to execute. The first line in many Perl programs is something like:


#!/usr/bin/perl


The interpreter compiles the program internally into a parse tree. Any words, spaces, or marks after a pound symbol (#) are ignored by the interpreter. After converting the program into the parse tree, the interpreter executes it immediately. Calling Perl an interpreted language is not strictly true: since the interpreter actually converts the program into byte code before executing it, it is sometimes called an interpreter/compiler, although the compiled form is not stored as a file.

18. “The methods defined in the derived class will always override the methods defined in the base class”. What does this statement mean?

The above statement is a concept of Polymorphism in Perl. To clarify the statement, let’s take an example:

package X;

sub foo
{
    print("Inside X::foo\n");
}

package Z;
our @ISA = ('X');

sub foo
{
    print("Inside Z::foo\n");
}

package main;
Z->foo();


This program displays:

Inside Z::foo

– In the above example, the foo() method defined in class Z overrides the version inherited from class X. Polymorphism is mainly used to add or extend the functionality of an existing class without reprogramming the whole class.

19. For a given programming situation, how can you determine that Perl is suitable?

If you need faster execution, Perl will meet that requirement. There is a lot of flexibility in programming if you want to develop a web-based application. We do not need to buy a license for Perl because it is free, and we can use CPAN (the Comprehensive Perl Archive Network), one of the largest repositories of free code in the world.

20. Write syntax to add two arrays together in perl?

@arrayvar = (@array1, @array2);

To accomplish the same, we can also use the push function.

21. How many types of operators are used in the Perl?

Arithmetic operators:

+, -, *

Assignment operators:

+=, -=, *=

Increment/decrement operators:

++, --

String concatenation:

the '.' operator

Comparison operators:

==, !=, >, <, >=

Logical operators:

&&, ||, !

22. If you want to empty an array then how would you do that?

We can empty an array by assigning it the empty list (@array = ()) or by setting its last index to -1 ($#array = -1). For example, to check whether an array is empty:

use strict;
use warnings;

my @checkarray;

if (@checkarray)
{
    print "Array is not empty";
}
else
{
    print "Array is empty";
}

23. Where are the command-line arguments stored, and how would you read them in Perl?

The command line arguments in Perl are stored in an array @ARGV.

$ARGV[0] (the first argument)

$ARGV[1] (the second argument) and so on.

$#ARGV is the subscript of the last element of the @ARGV array, so the number of arguments on the command line is $#ARGV + 1

24. Suppose an array contains @arraycontent=(‘ab’, ‘cd’, ‘ef’, ‘gh’). How to print all the contents of the given array?

@arraycontent = ('ab', 'cd', 'ef', 'gh');

foreach (@arraycontent)
{
    print "$_\n";
}

25. What is the use of -w, -t and strict in Perl?

When we use -w, it gives warnings about possible interpretation errors in the script.

strict tells Perl to force checks on the definition and usage of variables. It is invoked with the use strict pragma. If there are any unsafe or ambiguous commands in the script, this pragma stops the execution of the script instead of just giving warnings.

When -t is used, it switches on taint checking. It forces Perl to check the origin of variables; tainted (outside) variables cannot be used in sub-shell executions and system calls.

26. Write a program to download the contents from www.perlinterview.com/answers.php website in Perl.

#!/usr/bin/perl
use strict;
use warnings;
use LWP::Simple;

# getstore needs an absolute URL, including the scheme
my $siteurl = 'http://www.perlinterview.com/answers.php';
my $savefile = 'content.kml';

getstore($siteurl, $savefile);

27. Which has the highest precedence, List or Terms? Explain?

Terms have the highest precedence in Perl. Terms include variables, quotes, expressions in parentheses, etc. List operators have the same level of precedence as terms; specifically, these operators have very strong leftward precedence.

28. List the data types that Perl can handle?

Scalars ($): It stores a single value.

Arrays (@): It stores a list of scalar values.

Hashes (%): It stores associative arrays which use a key value as index instead of numerical indexes

29. Write syntax to use grep function?

grep BLOCK LIST

grep (EXPR, LIST)

30. What is the use of -n and -p options?

The -n and -p options are used to wrap scripts inside loops. The -n option makes Perl execute the script inside a loop over the input lines. The -p option uses the same loop as -n, but in addition it prints each line via a continue block. If both the -n and -p options are used together, the -p option is given preference.

31. What is the usage of the -i and -i.bak options?

The -i option is used to modify files in place. This implies that Perl renames the input file automatically and opens the output file using the original name. If the -i option is used alone, no backup of the file is created; using -i.bak instead causes Perl to create a backup of the file with a .bak extension.

32. Write a program that explains the symbolic table clearly.

In Perl, the symbol table is a hash that contains the list of all the names defined in a namespace and it contains all the functions and variables. For example:

package Foo;
$bar = 2;
sub baz {
    $bar++;
}

package main;

sub Symbols
{
    my($hashRef) = shift;
    my(%sym) = %{$hashRef};
    my(@sym) = sort(keys(%sym));
    foreach (@sym)
    {
        printf("%-10.10s| %s\n", $_, $sym{$_});
    }
}

# Dump the symbol table of package Foo after it has been populated
Symbols(\%Foo::);

33. How can you use Perl warnings and what is the importance to use them?

The Perl warnings are those in which Perl checks the quality of the code that you have produced. Mandatory warnings highlight problems in the lexical analysis stage. Optional warnings highlight cases of possible anomaly.

use warnings;          # same as importing "all"
no warnings;           # same as unimporting "all"

use warnings::register;

if (warnings::enabled()) {
    warnings::warn("any warning");
}

if (warnings::enabled("void")) {
    warnings::warn("void", "any warning");
}

34. Which statement has an initialization, condition check and increment expressions in its body? Write a syntax to use that statement.


for ($count = 10; $count >= 1; $count--)
{
    print "$count ";
}

35. How can you replace the characters from a string and save the number of replacements?

#!/usr/bin/perl
use strict;
use warnings;

my $string = "APerlAReplAFunction";
# tr/// replaces the characters and returns the number of replacements
my $counter = ($string =~ tr/A/-/);
print "There were $counter As replaced in the given string\n";
print "$string\n";

36. Remove the duplicate data from @array=(“perl”,”php”,”perl”,”asp”)

sub uniqueentr
{
    # map each element to a hash key, then take the unique keys
    return keys %{{ map { $_ => 1 } @_ }};
}

@array = ("perl", "php", "perl", "asp");
print join(" ", @array), "\n";
print join(" ", uniqueentr(@array)), "\n";


37. How can information be put into hashes?

A hash value is not created when it is referenced; it is only created once a value is assigned to it. The contents of a hash have no literal representation. If the hash is to be filled at once, it must be "unwound": the key-value pairs of the hash can be created from a list, where the odd-numbered items (on the left) become keys and the even-numbered items (on the right) become values. A hash has no defined internal ordering, so the user should not rely on any particular ordering.

Example of creating hash:

%birthdate = ( Ram   => "01-01-1985",
               Vinod => "22-12-1983",
               Sahil => "13-03-1989",
               Sony  => "11-09-1991" );


38. Why Perl aliases are considered to be faster than references?

In Perl, aliases are considered to be faster than references because they do not require any dereferencing.

39. How can memory be managed in Perl?

Whenever a variable is used in Perl, it occupies some memory space. Since the computer has limited memory the user must be careful of the memory being used by the program. For Example:

use strict;

open(IN, "in");
my @lines = <IN>;
close(IN);

open(OUT, ">out");
foreach (@lines)
{
    print OUT m/([^\s]+)/, "\n";
}
close(OUT);

On execution of above program, after reading a file it will print the first word of each line into another file. If the files are too large then the system would run out of memory. To avoid this, the file can be divided into sections.

40. How can you create anonymous subroutines?

sub BLOCK
sub PROTO BLOCK
sub ATTRS BLOCK
sub PROTO ATTRS BLOCK


41. What do you mean by context of a subroutine?

It is defined as the type of return value that is expected. You can use a single function that returns different values.

42. List the prefix dereferencer in Perl.

$ - Scalar variables
% - Hash variables
@ - Arrays
& - Subroutines
* - Typeglobs (*myvar stands for @myvar, %myvar, etc.)

43. In CPAN module, name an instance you use.

In CPAN, the CGI and DBI are very common packages

44. What are the advantages of C over Perl?

There are more development tools for C than for Perl. Perl programs execute slower than C programs. Perl appears to be an interpreted language, but the code is compiled on the fly. If you don’t want others to read your Perl code, you need to hide it somehow, unlike with C; without additional tools it is impossible to create an executable from a Perl program.

45. “Perl regular expressions match the longest string possible”. What is the name of this match?

It is called as “greedy match” because Perl regular expressions normally match the longest string possible.

46. How can you call a subroutine and identify a subroutine?

The ‘&’ sigil identifies a subroutine; a subroutine can be called as ‘&mysubroutine’ (for example, &mysubroutine(@args)).

47. What is use of ‘->’ symbol?

In Perl, the ‘->’ symbol is an infix dereference operator. If the right-hand side is an array subscript, hash key, or a subroutine, then the left-hand side must be a reference.

@array = qw/a b c d e/;        # an array

print "\n", $array->[0];       # wrong: $array is a reference, which was never defined
print "\n", $array[0];         # correct: @array is an array

48. Where do we require ‘chomp’ and what does it mean?

We can eliminate a trailing newline character by using ‘chomp’. It can be used in many different scenarios. For example, when a script is run as executeScript.pl FirstArgument:

$argu = $ARGV[0];

chomp $argu;   # gets rid of the trailing carriage return / newline


49. What does the’$_’ symbol mean?

The ‘$_’ is a default variable in Perl, known as the “default input and pattern-matching space”.

50. What interface used in PERL to connect to database? How do you connect to database in Perl?

We can connect to database using DBI module in Perl.


use DBI;

my $dbh = DBI->connect('dbi:Oracle:orcl', 'username', 'password');

51) What is PERL? What is the basic command to print a String in Perl?

Perl is a programming language designed for text processing. It runs on various platforms Windows, Mac OS and various versions of UNIX.

To print a string, you need to observe the following:

The string should be in double quotes: "....."
A special character referred to as "newline": \n
A semicolon at the end of the statement: ;

Example: print "Hello, World!\n";

52) List the operator used in Perl?

Operators used in Perl are:

String concatenation: '.'
Comparison operators: ==, !=, >, <, >=
Logical operators: &&, ||, !
Assignment operators: +=, -=, *=
Increment and decrement operators: ++, --
Arithmetic operators: +, -, *

53) Explain which feature of PERL provides code reusability?

To provide code re-usability in PERL inheritance feature is used. In Inheritance, the child class can use the methods and property of the parent class.

54) Mention the difference between die and exit in Perl?

die prints a message to STDERR before ending the program, while exit simply ends the program.

55) In Perl, what is grep function used for?

To filter the list and return only those elements that match certain criteria Perl grep function is used.

56) What is the syntax used in Perl grep function?

The syntax used in Perl is

a) grep BLOCK LIST

b) grep ( EXPR, LIST )

BLOCK: It contains one or more statements delimited by braces; the last statement in the block determines whether the block evaluates to true or false.
EXPR: It represents any expression that operates on $_, typically a regular expression. The expression is applied against each element of the list, and if the result of the evaluation is true, the current element is appended to the returned list.
LIST: It is a list of elements or an array.

57) Explain what is the scalar data and scalar variables in Perl?

Scalar in Perl means a single entity like a number or a string. So the Java concepts of int, float, double, and String all correspond to Perl's scalar, and numbers and strings are interchangeable. A scalar variable is used to store scalar data. It uses a $ sign followed by one or more alphanumeric characters or underscores, and it is case sensitive.

58) What does -> symbol indicates in Perl?

In Perl, the arrow -> symbol is used to create or access a particular object of a class.

59) Mention how many ways you can express string in Perl?

You can express string in Perl in many ways

For instance “this is guru99.”

qq/this is guru99 like double quoted string/
qq^this is guru99 like double quoted string^
q/this is guru99/
q&this is guru99&
q(this is guru99)

60) Explain USE and REQUIRE statements?

REQUIRE statement: It loads a module at run time; the module's functions and objects are accessed through the package name.

Example: require Module;

$var = Module::method(); # method called with the module reference

USE statements are interpreted and executed during parsing (compile time), while require statements are executed during run time.

Example: use Module;

$var = method(); # method can be called directly

61) Explain what is Chop & Chomp function does?

The chop function eliminates the last character from an expr, or from each element of a list, unconditionally.
The chomp function eliminates the last character from an expr or each element of a list only if it matches the value of $/ (usually the newline). It is considered safer than chop because it removes the character only when there is a match.

62) Mention what is CPAN?

CPAN means Comprehensive Perl Archive Network, a large collection of Perl software and documentation.

63) Explain what is Polymorphism in Perl?

In Perl, polymorphism means the methods defined in a derived (child) class will override the methods defined in its base (parent) class.

64) Mention what are the two ways to get private values inside a subroutine or block?

There are two ways through which private values can be obtained inside a subroutine or block

Local operator: This operator can operate on global variables only. The current value of the variable is saved by the local operator, and provision is made to restore it at the end of the block.
My operator: This operator is used to define or create a new variable. A variable created by the my operator is always private (lexically scoped) to the block inside which it is defined.

65) Explain what is STDIN, STDOUT and STDERR?

STDIN: The STDIN file handle is used to read from the keyboard
STDOUT: It is used to write into the screen or another program
STDERR: It is also used to write into a screen. STDERR is a standard error stream that is used in Perl.

66) What is the closure in PERL?

A closure is a block of code that captures the environment where it is defined; in particular, it captures any lexical variables that the block uses from an enclosing scope.

67) Explain what is Perl one liner?

A one-liner is a program supplied entirely on the command line and executed immediately.

For example:

# run a program under the debugger

perl -d my_file

68) Explain what is an lvalue?

An lvalue is a scalar value which can be used to store the result of any expression. Usually, it appears on the left-hand side of an expression and represents a data space in memory.

69) What are growing and shortening of arrays, and what is splicing of arrays?

Growing and shortening of arrays can be done directly by assigning to a non-existent index, to which Perl automatically adjusts the array size.

Splicing of arrays removes or replaces elements of an array in place, at the position identified in the splice function, instead of extracting them into another array.

70) Explain what is the function that is used to identify how many characters are there in a string?

To tell how many characters there are in a string, the length() function is used.

71) Explain what are prefix dereferencer and list them out?

The particular prefixes used when dereferencing a variable are called prefix dereferencers:

$ - Scalar variables
% - Hash variables
@ - Arrays
& - Subroutines
* - Typeglobs (*myvar stands for @myvar, %myvar, etc.)


72) Explain what is the function of Return Value?

The return value is a reference to an object blessed into CLASSNAME (which is what bless returns).


JMS Advanced Experienced Freshers Interview Questions

 
What can be the likely cause of the “javax.naming.NameNotFoundException: MyQueueConnectionFactory” not found error?

The error detail of the above error is as follows:
Initiating login ...
Binding name:`java:comp/env/jms/QueueName `
Binding name:`java:comp/env/jms/MyQueueConnectionFactory `
JNDI lookup failed: javax.naming.NameNotFoundException: MyQueueConnectionFactory not found
Unbinding name:`java:comp/env/jms/QueueName `
Unbinding name:`java:comp/env/jms/MyQueueConnectionFactory `

If the user looks closely, there are extra spaces after the queue name and the MyQueueConnectionFactory name. The deploytool regards any extra space entered by the user as part of the name, so it cannot take the input properly.
To rectify this problem the user needs to use the Resource Refs tabbed pane to delete the extra spaces. Once the extra spaces are deleted and the application is saved, this error will no longer occur upon redeployment.

What can be done to map the javax.jms.message & javax.mail.message?

The following steps can be performed to map the javax.jms.message with javax.mail.message:
Mapping to javamail domain from jms domain (To receive a javax.jms.Message):
- A JMS topic/queue is associated to multiple email id`s.

- Mapping the JMS Message Header to ‘custom’ JavaMail Message Header.

- Associating the JMS Message Body with the JavaMail Message body.

- The JavaMail client application can process these ‘custom’ headers and the content of the message body.
Mapping to jms domain from javamail(To receive a javax.mail.Message):

- An e-mail id can be associated with multiple JMS topics/queues.

- Mapping the JavaMail Message Header to a ‘custom’ JMS Message Header.

- Associating the JavaMail Message Body with the JMS Message body.

- The JMS client application will process these ‘custom’ headers and the content of the message body.

What is a MOM in reference to JMS?

Software components in a distributed computing network often need to pass messages between them. These are to be done asynchronously. The MOM or the message oriented middleware is a software that is placed between any two communicating components. A middleware is a component that is placed between the client and the server. The MOM provides the facility of message passing by using the technique of queuing. All the messages are stored in the form of queue till the client that requested it can read it. The basis of reading messages by the client can be by FIFO or priority. By using the concept of queuing the software components can work independently of time.

Describe briefly the components of the Java Messaging Service.

The JMS application comprises of the following components:

- JMS provider: This is the main messaging system, which implements the JMS interfaces. It also provides administrative and control features.

- JMS clients: These comprise the programs and components which are written using the Java programming language. They are responsible for the production and consumption of messages.

- Messages: These are the objects that are used to communicate between two clients.

- Administered objects: These are predefined objects created by the administrator which can be used by the clients.

- Native clients: These are programs that do not use the JMS API client but instead use the messaging product's own native client API.

What is the reason of getting the “ Unable to get the internal JNDI context error “?

This type of error occurs when the user has incorrectly specified the client properties file. A good example of this is using Windows syntax in a UNIX system environment. It can also be caused when the path to the jms_client.properties file is not correct. The user also has to define the path to the config directory properly.

The error detail is shown as follows:

java -Djms.properties=%J2EE_HOME%\config\jms_client.properties SimpleQueueSender MyQueue 3
Queue name is MyQueue

SEVERE JMSInitialContext: Unable to get internal JNDI context because:
javax.naming.CommunicationException: Cannot connect to ORB [Root exception is
org.omg.CORBA.COMM_FAILURE: minor code: 1398079689 completed: No]
This error can be rectified by the user setting the classpath for the client application correctly.

What do you understand by JMS messaging domain?

The JMS domain provides an approach to messaging.

The JMS provides a user with two types of domains:

- Publish/subscribe domain
- Point to point domain

The point-to-point model enables JMS clients to send and receive messages via virtual channels known as queues. The message generators are the senders and the message consumers are the receivers; messaging can be both synchronous and asynchronous. In the publish/subscribe model, messages are passed via virtual channels known as topics. The message producers are called publishers, whereas the receivers are known as subscribers.
The JMS specification is responsible for providing the settings and other restrictions for both the types of domains.

Under what arguments in non-transacted sessions are messages acknowledged?

Messages are acknowledged in non-transacted sessions on the basis of the second argument to the createQueueSession or the createTopicSession method. The possible argument values are:

- Session.AUTO_ACKNOWLEDGE: In this scenario the session automatically acknowledges a client's receipt of a message, either when the client returns from a call to receive or when the message listener it invoked returns successfully.

- Session.CLIENT_ACKNOWLEDGE: In this method the client acknowledges a message by calling the acknowledge method of the message.

- Session.DUPS_OK_ACKNOWLEDGE: This option instructs the session to acknowledge the delivery of messages lazily. It can reduce session overhead but should only be used by consumers that can tolerate duplicate messages.
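The CLIENT_ACKNOWLEDGE case can be illustrated with a small in-memory sketch. This is not the real JMS API; the ClientAckSketch class and its methods are invented for the example. Delivered messages stay pending until acknowledge() is called, and anything still pending would be redelivered on recovery:

```java
import java.util.ArrayList;
import java.util.List;

public class ClientAckSketch {
    private final List<String> pending = new ArrayList<>();

    // Simulates delivery: the message is handed out but not yet acknowledged.
    String deliver(String msg) {
        pending.add(msg);
        return msg;
    }

    // In CLIENT_ACKNOWLEDGE mode, acknowledging clears the session's
    // outstanding batch of delivered messages.
    void acknowledge() {
        pending.clear();
    }

    // Simulates recovery: every unacknowledged message is redelivered.
    List<String> recover() {
        return new ArrayList<>(pending);
    }

    public static void main(String[] args) {
        ClientAckSketch session = new ClientAckSketch();
        session.deliver("m1");
        session.deliver("m2");
        System.out.println("redelivered before ack: " + session.recover());
        session.acknowledge();
        System.out.println("redelivered after ack: " + session.recover());
    }
}
```

Before the acknowledge() call both messages would be redelivered; afterwards, none.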

What do you understand by the creation of durable subscriptions?

If the user wants to make sure the application receives all published messages, the persistent delivery mode should be used. The TopicSession.createSubscriber method creates a non-durable subscriber, which can only receive messages published while the subscriber itself is active. The TopicSession.createDurableSubscriber method instead creates a durable subscriber, whose advantage is that it can also receive messages published while it was inactive. A durable subscriber registers a durable subscription, recognized by a unique identity that is retained by the JMS provider. A durable subscription continues to exist and hold messages until the subscriber calls the unsubscribe method.
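The behavioural difference can be sketched with a small in-memory model. This is an illustration only, not the JMS API; the DurableSketch class and its fields are invented. A durable subscription keeps messages published while its subscriber is inactive, while a non-durable one only sees messages published while it is active:

```java
import java.util.ArrayList;
import java.util.List;

public class DurableSketch {
    static class Subscription {
        final boolean durable;
        boolean active = true;
        final List<String> inbox = new ArrayList<>();
        Subscription(boolean durable) { this.durable = durable; }
    }

    // The "provider" retains a published message for a durable subscription
    // even while its subscriber is inactive.
    static void publish(List<Subscription> subs, String msg) {
        for (Subscription s : subs) {
            if (s.active || s.durable) {
                s.inbox.add(msg);
            }
        }
    }

    public static void main(String[] args) {
        Subscription durable = new Subscription(true);
        Subscription nonDurable = new Subscription(false);
        List<Subscription> subs = List.of(durable, nonDurable);

        durable.active = false;
        nonDurable.active = false;
        publish(subs, "while-offline"); // only the durable subscription keeps this

        durable.active = true;
        nonDurable.active = true;
        publish(subs, "while-online");  // both receive this

        System.out.println("durable: " + durable.inbox);
        System.out.println("non-durable: " + nonDurable.inbox);
    }
}
```

The durable subscription ends up with both messages, the non-durable one with only the message published while it was active.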

Write the program to count the number of messages in a queue.

The code for counting the number of messages in a queue would be:
package PointToPoint;
import java.util.Enumeration;
import javax.naming.InitialContext;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.Message;
import javax.jms.QueueSession;
import javax.jms.QueueBrowser;
import javax.jms.QueueConnection;
import javax.jms.QueueConnectionFactory;

public class Browser
{
   public static void main(String[] args) throws Exception
   {

       InitialContext ctx = new InitialContext();

       Queue queue = (Queue) ctx.lookup("queue/queue0");

       QueueConnectionFactory connFactory = (QueueConnectionFactory) ctx.lookup("queue/connectionFactory");

       QueueConnection queueConn = connFactory.createQueueConnection();

       QueueSession queueSession = queueConn.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);

       QueueBrowser queueBrowser = queueSession.createBrowser(queue);

       queueConn.start();

       Enumeration e = queueBrowser.getEnumeration();
       int numMsgs = 0;

       while (e.hasMoreElements())
       {
           Message message = (Message) e.nextElement();
           numMsgs++;
       }

       System.out.println(queue + " has " + numMsgs + " messages");

       queueConn.close();
   }
}


How can asynchronous messaging deadlocks be avoided?

Asynchronous message delivery in JMS can deadlock if the close() method of a connection or session is called inside a user-level synchronized block. To prevent these deadlocks, the user must call the close() method outside the user-synchronized block.
An example of a code snippet :
public class CloseTest
{
   private void xxx()
   {
       synchronized (this)
       {
           create connection/session/consumer
           initialize and set a listener for this consumer;
           wait();
           connection.close();
       }
   }
   private void onMessage(Message message)
   {
       synchronized (this)
       {
           notify();
       }
   }
}

In the above example, another message can be delivered to the onMessage routine just before connection.close() is called. The thread running xxx() holds the monitor lock, and close() does not return while a message listener is still executing. If the onMessage routine then tries to acquire the same monitor lock in order to call notify(), neither thread can proceed, and a deadlock results.
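The fix (wait inside the lock, close outside it) can be sketched with plain Java threads. This is an illustration of the locking pattern only; CloseOutsideLock and its methods are invented here, standing in for the JMS connection and its message listener:

```java
public class CloseOutsideLock {
    private final Object lock = new Object();
    private boolean done = false;

    // Simulated listener callback, playing the role of onMessage: it needs
    // the same monitor the main thread holds.
    public void onMessage() {
        synchronized (lock) {
            done = true;
            lock.notify();
        }
    }

    public String run() throws InterruptedException {
        Thread listener = new Thread(this::onMessage);
        synchronized (lock) {
            listener.start();
            while (!done) {
                lock.wait(); // releases the lock so onMessage can run
            }
        }
        // The close() equivalent happens here, OUTSIDE the synchronized
        // block, so it can never wait on a listener that needs the lock.
        listener.join();
        return "closed outside lock";
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(new CloseOutsideLock().run());
    }
}
```

Because the close step runs after the monitor has been released, a listener blocked on the lock can always finish first, and the two threads can never wait on each other.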

How can an MDB transaction be rolled back? Give an example.

To roll back a transaction within an MDB, the user can make use of the WebLogic extension TXHelper, which helps automate transaction rollbacks. The other approach is to use the MDB context to mark the transaction for rollback.

For example, using TXHelper or the MDB context for rollbacks:
UserTransaction ut = weblogic.transaction.TXHelper.getUserTransaction();
ut.setRollbackOnly();

OR

private MessageDrivenContext context;
public void setMessageDrivenContext(MessageDrivenContext mycontext)
{
   context = mycontext;
}
public void onMessage(Message msg)
{
   try
   {
       // some logic
   }
   catch(Exception e)
   {
       System.out.println("MDB doing rollback");
       context.setRollbackOnly();
   }
}

Write a program to send a message to a queue.

JMS has two primary messaging models: point-to-point and publish/subscribe. The queue is used in the point-to-point model.

The code for sending a message to a queue:

import javax.jms.ConnectionFactory;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.Queue;
import javax.jms.Session;

import org.springframework.jms.core.MessageCreator;
import org.springframework.jms.core.JmsTemplate;
import org.springframework.jms.core.JmsTemplate102;

public class JmsQueueSender
{
   private JmsTemplate jmsTemplate;

   private Queue queue;

   public void setConnectionFactory(ConnectionFactory cf)
   {
       jmsTemplate = new JmsTemplate102(cf, false);
   }

   public void setQueue(Queue q)
   {
       queue = q;
   }

   public void simpleSend()
   {
       this.jmsTemplate.send(this.queue, new MessageCreator()
       {
           public Message createMessage(Session session) throws JMSException
           {
               return session.createTextMessage("hello queue world");
           }
        });
   }
}

What do you understand by the Event driven architecture?

The event-driven architecture (EDA), as the name suggests, is built on the concept of processes and events, which can be dynamic and complex. Whenever an action occurs in a system, the relevant process sends an event to the entire system stating that the action took place. A single event can activate more processes, which in turn can give rise to further processes. A good example of EDA can be seen in insurance management systems. If a user changes his address, this is treated as an event, and the entire system is informed that an address change has occurred; this can lead to changes in many other parts or domains of the system. Such systems use EDA because it localizes changes and does not require a single large central processing engine.

What are the ways in which BytesMessage can be used?

The BytesMessage is a special form of message that contains a payload of primitive bytes. Since it carries raw bytes, it can be used to transfer data between two communicating applications, and it is also useful when a JMS implementation acts purely as a transport between systems. The message payload remains opaque to the client. When a primitive type is stored, the value is converted into its byte representation and then stored in the payload. The different data types cannot be distinguished from each other in this representation, so the chance of errors while reading the data increases. It is therefore recommended that a payload be read in the same order in which it was written by the sender.
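Why read order matters can be shown with plain java.io streams, which store primitives as raw bytes in the same way a BytesMessage payload does (this sketch uses DataOutputStream/DataInputStream rather than the JMS classes, but the byte-level behaviour is analogous):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class ByteOrderDemo {
    // Write an int and then a double; nothing in the bytes records which
    // type each value was.
    static byte[] writePayload() throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(buf);
        out.writeInt(42);       // 4 bytes
        out.writeDouble(3.14);  // 8 bytes
        return buf.toByteArray();
    }

    // Reading in write order reconstructs the original values.
    static String readInWriteOrder(byte[] payload) throws IOException {
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(payload));
        return in.readInt() + "/" + in.readDouble();
    }

    // Reading in the wrong order silently misparses the same bytes.
    static String readInWrongOrder(byte[] payload) throws IOException {
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(payload));
        return in.readDouble() + "/" + in.readInt();
    }

    public static void main(String[] args) throws IOException {
        byte[] payload = writePayload();
        System.out.println(readInWriteOrder(payload)); // 42/3.14
        System.out.println(readInWrongOrder(payload)); // garbage values
    }
}
```

The wrong-order read does not fail; it just yields meaningless numbers, which is exactly the kind of silent error the advice above guards against.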

What are the different ways in which the message delivery can be made more reliable?

The different ways in which the reliability of message delivery can be improved are:
- Controlling message acknowledgement: The user can specify different levels of control over acknowledgements.

- Specifying message persistence: The user can set the messages to be persistent so that they are not lost in case of failures.

- Setting priority levels: The user can set the priority of various messages.

- Message expiry: The user can set the expiration time for messages.

- Temporary destinations: The user can create temporary destinations which last only till the duration of the connection.
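The per-message knobs above can be sketched in plain Java. This is an illustration only; the Msg class and its fields are invented, loosely mirroring the deliveryMode, priority, and timeToLive parameters a JMS producer can set when sending:

```java
public class ReliabilitySketch {
    static class Msg {
        final String body;
        final boolean persistent; // survives provider failure if true
        final int priority;       // 0..9, higher is delivered first
        final long expiresAt;     // absolute expiry time; 0 = never expires

        Msg(String body, boolean persistent, int priority, long ttl, long now) {
            this.body = body;
            this.persistent = persistent;
            this.priority = priority;
            this.expiresAt = (ttl == 0) ? 0 : now + ttl;
        }

        // A message with a nonzero TTL expires once "now" passes expiresAt.
        boolean expired(long now) {
            return expiresAt != 0 && now > expiresAt;
        }
    }

    public static void main(String[] args) {
        long now = 1_000;
        Msg shortLived = new Msg("temp", false, 4, 500, now);
        Msg durable = new Msg("audit", true, 9, 0, now);

        long later = now + 1_000;
        System.out.println("temp expired: " + shortLived.expired(later));
        System.out.println("audit expired: " + durable.expired(later));
    }
}
```

The short-lived, non-persistent message lapses after its TTL, while the persistent, never-expiring message is still deliverable, matching the persistence and expiry points in the list above.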

Write the code for an asynchronous class queue receiver.

The code for the asynchronous class queue receiver would be as follows:
package pointToPoint;
import javax.naming.InitialContext;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.Message;
import javax.jms.TextMessage;
import javax.jms.MessageListener;
import javax.jms.JMSException;
import javax.jms.ExceptionListener;
import javax.jms.QueueSession;
import javax.jms.QueueReceiver;
import javax.jms.QueueConnection;
import javax.jms.QueueConnectionFactory;

public class AsyncReceiver implements MessageListener, ExceptionListener
{
   public static void main(String[] args) throws Exception
   {
       InitialContext ctx = new InitialContext();

       Queue queue = (Queue) ctx.lookup("queue/queue0");

       QueueConnectionFactory connFactory = (QueueConnectionFactory) ctx.lookup("queue/connectionFactory");

       QueueConnection queueConn = connFactory.createQueueConnection();

       QueueSession queueSession = queueConn.createQueueSession(false,Session.AUTO_ACKNOWLEDGE);

       QueueReceiver queueReceiver = queueSession.createReceiver(queue);

       AsyncReceiver asyncReceiver = new AsyncReceiver();
       queueReceiver.setMessageListener(asyncReceiver);
       queueConn.setExceptionListener(asyncReceiver);

       queueConn.start();

       System.out.print("waiting for messages");
       for (int i = 0; i < 10; i++)
       {
           Thread.sleep(1000);
           System.out.print(".");
       }
       System.out.println();

       queueConn.close();
   }

   public void onMessage(Message message)
   {
       TextMessage msg = (TextMessage) message;
       try
       {
           System.out.println("received: " + msg.getText());
       }
       catch (JMSException ex)
       {
           ex.printStackTrace();
       }
   }

   public void onException(JMSException exception)
   {
       System.err.println("an error occurred: " + exception);
   }
}

Write the code to create a session bean performing JMS operations.

A bean should always contain the code to initialize the JMS administered objects it uses. To prevent repetition of this code, the user can place it in the ejbCreate method.

For ex. Code snippet for the usage of create method:
public class EjbCompBean implements SessionBean
{
   ...
   InitialContext ictx = null;
   ConnectionFactory cf = null;
   Topic topic = null;

public void ejbCreate()
{
   ....
   ictx = new InitialContext();
   cf = (ConnectionFactory)
   ictx.lookup("java:comp/env/jms/conFactSender");
   topic = (Topic) ictx.lookup("java:comp/env/jms/topiclistener");
}
   ...
}

In what ways the clustering process can be improved?

Some of the better practices that can be used for clustering are:

- Minimization of the JMS client-side state: The user should perform work in transacted sessions and checkpoint state so that a full recovery is possible.

- Non-durable subscriptions to be avoided: The usage of non-durable subscriptions should be kept to a minimum.

- Avoid keeping durable subscriptions alive for long periods: The user should be aware that only one subscriber can be active on a given durable subscription at any time, while load balancing and clustering lead to the creation of multiple application instances. The JMSException raised when a duplicate instance tries to activate the same durable subscription prevents duplicate instances from executing and hence helps in maintaining an organized system.

Write the server-side code for FAILOVER.

Below is the code snippet for a queue that is server side fail-over tolerant:
while (notShutdown)
{
   Context ctx = new InitialContext();

   QueueConnectionFactory qcf = (QueueConnectionFactory)
   ctx.lookup(QCF_NAME);
   Queue q = (Queue) ctx.lookup(Q_NAME);
   ctx.close();

   try
   {

       QueueConnection qc = qcf.createQueueConnection();
       QueueSession qs = qc.createQueueSession(true, 0);
       QueueSender snd = qs.createSender(q);
       QueueReceiver rcv = qs.createReceiver(q);

       qc.start();

       while (notDone)
       {
           Message request = rcv.receive();
           Message reply = qs.createMessage();

           snd.send(reply);
           qs.commit();
       }

       qc.stop();
   }
   catch (JMSException ex)
   {
        if (transientServerFailure)
        { /* retry */ }
       else
       {
           notShutdown = false;
       }
   }
}

What are the possible ways to set aside a message and then acknowledge it later?

By default there are no primitives defined for performing such actions, but there are two possible approaches:

- Multiple session usage:
The user can set aside a message and acknowledge it later by using the following code snippet:
while (true)
{
   Create a session, subscribe to one message on durable subscription
   Save session reference in memory
   To acknowledge the message, find the session reference and call
   acknowledge() on it.
}

- Suspend work:
The user can use another approach where the transactions are used and the work is suspended. Code snippet:
start transaction
while (true)
{
    message = receive();
    if (message is one that I can handle)
    {
        process the message
        commit
        start transaction
    }
    else
    {
        suspend transaction
        put transaction aside with message
        start transaction
    }
}