Friday, September 29, 2006

Visual Studio 2005 SDK

All of us want some sort of customization or extensibility in off-the-shelf products and solutions, primarily because a product sometimes cannot fulfill all your requirements. There can be various reasons for this. One is that the product was designed with generic features in mind, suitable for most usage scenarios; if you have very specific requirements, you end up wanting more from it. This is where customization, or the more popular term add-ins, comes into the picture. Fortunately, Microsoft products provide the required room (in terms of APIs, documentation etc.) for customization. If you have used Excel with VBA, the Microsoft Business Solutions products, BizTalk etc., you will definitely see the point I am trying to make here.
For Microsoft developers, the Visual Studio IDE is a real beauty. It provides starter kits, various project templates, different project types and so on, each of which contains specific files. So, if you use the ASP.NET web application template, you get a few files ready for you in the Solution Explorer, including WebForm1.aspx. All of this is fine and cool to have. But as I said earlier, if you want specific features, you need to do some work yourself. For example, if you want your developers to use your organization-specific project templates or starter kits, you first need to create those templates and starter kits. This is what I call custom project templates, custom starter kits and custom themes.
The Visual Studio 2005 SDK is provided so that you can customize and extend your own Visual Studio 2005 to suit your requirements. And with the Visual Studio automation object model, you can do all of these things in your own language.

How can you get the maximum out of VS 2005 SDK?
· Automate Repetitive Actions Using Macros
It is now possible to record repetitive actions in a macro and execute them whenever you want. For example, if your developers use a series of keystrokes frequently, they can record a macro for it. Next time, they just have to run the macro instead of performing the same series of keystrokes again. I know QA people will say that this is not a big deal, since they already do this with automation tools like Mercury QTP. But developers will realize its importance for them. So, even if you are mostly just pressing CTRL+V, with a macro you will have to click only once. :) Sorry folks, I know that was a bad example. :) But the fact is that if you use this feature effectively, you will end up saving a lot of time with ease.
· Maintain Consistency Across Projects
You can now create your own templates, starter kits or themes. For example, you can create a custom template for a web application and put resource files for three or four languages in it, along with a web.config file and a few XML files. When developers use this project template, they will find all of those items present in the Solution Explorer. This feature is really handy in large organizations. For beginners, you can provide starter kits to get them started.
· Integrate with External Products and Processes
The SDK allows your IDE to integrate with third-party tools. So, if you do not want to spend time doing these things on your own, you can check the available product catalogue on the Microsoft site and buy one.
· Use the Automation Object Model to Create Add-ins or Wizards
Now the complete automation object model can be leveraged to create your own add-ins and wizards. Add-ins are compiled applications that manipulate the environment and automate tasks. Add-ins can be invoked in a variety of ways, including the Add-in Manager, toolbar commands or buttons, the devenv command line, and events such as IDE startup.
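To make this concrete, here is a minimal, hypothetical sketch in C# of the entry point an add-in exposes. The class name and Output window pane name are my own inventions; the IDTExtensibility2 interface and the DTE2 automation object are what the SDK's add-in project template is built around. Treat it as an illustration rather than a complete add-in.

    using System;
    using Extensibility;
    using EnvDTE;
    using EnvDTE80;

    // Minimal add-in connect class (sketch): when the add-in loads, it writes
    // a line to a custom Output window pane via the DTE2 automation object.
    public class Connect : IDTExtensibility2
    {
        private DTE2 _applicationObject;

        public void OnConnection(object application, ext_ConnectMode connectMode,
                                 object addInInst, ref Array custom)
        {
            _applicationObject = (DTE2)application;
            OutputWindowPane pane = _applicationObject.ToolWindows.OutputWindow
                .OutputWindowPanes.Add("My Add-in");   // hypothetical pane name
            pane.OutputString("Add-in connected.\n");
        }

        // Remaining IDTExtensibility2 members left empty for brevity.
        public void OnDisconnection(ext_DisconnectMode disconnectMode, ref Array custom) { }
        public void OnAddInsUpdate(ref Array custom) { }
        public void OnStartupComplete(ref Array custom) { }
        public void OnBeginShutdown(ref Array custom) { }
    }
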
· Make a Product Out of Your Customizations and Sell It to Others
And finally, last but not least: you can bundle all your customizations into one product and sell it to the world. :) You can join the MS partner program and make sure that your product appears in the product catalogue. And believe me, this kind of product development is really easy. :)

If this blog has got you charged up, do visit the MS URL below to download the VS 2005 SDK V3 RTM.
http://msdn.microsoft.com/vstudio/extend/default.aspx

After you install it, you will see two main options in your Programs menu: Help Studio Lite and the Visual Studio 2005 Experimental Hive. As usual, you get hundreds of sample applications with the SDK. :) Wishing you happy VS 2005 extensibility!


Cheers,
Amol

Monday, September 25, 2006

Web Applications…..where are we heading?

Early in my career, I was involved in a project using Microsoft Active Server Pages 1.0. It was a time when there were no rich controls and no data grid kind of stuff. We used to write lots of script (client-side as well as server-side). Everything was processed on the server, which then sent the HTML back to the requesting client.

Seven years down the line, I can see tons of improvements in web applications: richer user interfaces, faster processing on the server, faster responses, and a move from simple ISAPI filters and interpreted pages to compiled versions. I must say that ASP.NET has revolutionized web applications completely. Look at the way programmers are writing code now. The classic separation of the presentation and business logic layers, and the incorporation of the MVC pattern, still amuses me sometimes.

The birth of ASP.NET 1.0 gave us lots of rich server controls. Look at the DataGrid control. These controls have built-in capabilities like paging, sorting, customization etc. This saves a lot of time for developers and also makes for a rich end-user experience. And now we have ASP.NET 2.0.

With concepts like web parts, navigation controls and of course AJAX, web apps are exceeding our expectations. And if we add DHTML and Flash/Flash Remoting to the list, we get awesome results. With increased bandwidth and high-end servers, are we going to replace desktop applications? I think that although web apps will not replace desktop apps completely, they will come close. Users can access web apps from browsers, PDAs, cell phones, smart phones, BlackBerry devices etc. The thin client was always the plus point of web applications, and it will remain so. Overall, the future of web applications looks promising. There will be lots of new approaches in the way we use them, and some applications will eventually move toward a somewhat fat client: the ability to go offline, do your work, and then go online again to upload the changes. I think asynchronous programming (AJAX) is going to be the key technology for web applications in the future. Rich graphics, web-based office solutions and web-based development environments/IDEs are also a possibility.

Cheers,
Amol

Should you use Directory or RDBMS?

In most development projects we use relational databases. But if you are working on a project that uses directory services (like Active Directory, Sun ONE etc.), chances are high that you will end up comparing directories with relational databases. If you have been using an RDBMS regularly, you are almost certainly going to apply the same thinking to directories as well, and that is not at all a good idea.

Alternatively, consider this scenario. You are in the middle of designing an application when suddenly you come up against this question: “Do I need to use a directory or a relational database here?” Well, this is not really a difficult question to answer, but you need a thorough understanding of both directories and relational databases to answer it. If you go ahead with an inadequate understanding of either of them, it can bring your application to its knees.

It is very important to understand the similarities and differences between the two. This blog is just an attempt to resolve a few doubts with respect to directories and RDBMSs. I hope that the next time you face a situation like the ones mentioned above, you will be better equipped to make the call.

First of all, let us try to understand what we mean by a directory.

· So, what do we mean by a directory?
A directory is a specialized database designed for searching and browsing information. Typically, it stores typed and ordered information about objects. Directories are tuned for read performance, so a directory typically serves many more read operations than write operations. That does not mean you cannot perform writes.

So, the next question is how this relates to LDAP. Well, LDAP (Lightweight Directory Access Protocol) is an open-standard protocol for accessing X.500 directory services. The protocol runs over Internet transport protocols such as TCP. The LDAP standard consists of schema definitions, the LDIF file exchange format and definitions for some object classes. If a directory is LDAP-compliant, it can interpret and respond to LDAP requests from LDAP client applications. In a network, a directory tells you where something is located. On TCP/IP networks (including the Internet), the Domain Name System (DNS) is the directory system used to relate a domain name to a specific network address (a unique location on the network). However, you may not know the domain name. LDAP allows you to search for an individual without knowing where they are located (although additional information will help with the search).
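As a quick illustration of that "search without knowing the location" idea, here is a small C# sketch using the System.DirectoryServices classes that ship with the .NET Framework. The LDAP path, filter and attribute name are made-up examples; adjust them for your own directory.

    using System;
    using System.DirectoryServices;

    class DirectoryLookup
    {
        static void Main()
        {
            // Hypothetical LDAP path; point this at your own directory root.
            using (DirectoryEntry root = new DirectoryEntry("LDAP://DC=example,DC=com"))
            using (DirectorySearcher searcher = new DirectorySearcher(root))
            {
                // Find a person anywhere under the root, without knowing the exact entry.
                searcher.Filter = "(&(objectClass=user)(cn=John Doe))";
                searcher.PropertiesToLoad.Add("mail");

                SearchResult result = searcher.FindOne();
                if (result != null && result.Properties["mail"].Count > 0)
                {
                    Console.WriteLine("Mail: " + result.Properties["mail"][0]);
                }
            }
        }
    }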

Most of us have used Microsoft Active Directory or the Sun ONE Directory Server. Now you can relate the definition of a directory to these products.
Okay, we know the definition of a directory. Now, what are the typical characteristics of directories?
1. Static Data:
The data stored in a directory is not subject to frequent change or modification.
2. Hierarchical:
It stores objects hierarchically to express organization and relationships. An LDAP directory is organized in a simple tree hierarchy consisting of levels such as: root directory -> countries -> organizations within those countries -> organizational units -> individuals.
3. Standard Schema:
It uses a standard schema, which is available to all applications that make use of it.
4. Object-oriented:
It represents entities as objects. Objects are derived from object classes and are collections of attributes.
5. Multi-valued Attributes:
Attributes can have multiple values.
6. Distributed:
It is distributed in nature and can be spread across many servers.
7. LDAP Protocol:
This lightweight protocol is used to access directories.
8. Transactions Not Supported:
A directory does not support transactions, but if you need to, you can implement custom transaction management in your client application.


· What are the characteristics of an RDBMS?
Most of us are aware of RDBMS concepts, along with Codd’s 12 rules. So, instead of going into those details, let us focus on its characteristics as compared to a directory.
1. Dynamic/ frequently changing Data:
The data stored is frequently updated; there are many more write/update operations. An RDBMS can also be used to store vast amounts of historical data, which can later be used for data mining or for building data cubes. This is really useful for business intelligence.
2. Relational:
Data is stored in the form of rows and columns or in the tabular format.
3. Custom Database Schema:
The database schema is specific to the application. It can be anything from a simple schema to a star schema, snowflake schema etc.
4. Complete Data Models:
It typically uses complex data models with various tables, key constraints, join operations etc.
5. Transactions Supported:
It supports transactions and thus follows the ACID properties of a transaction.
6. Data Integrity:
It provides many mechanisms for data integrity, from transaction rollback to referential integrity and so on.
7. SQL:
You can use SQL to fire select/insert/update/delete queries against the data store. It also allows you to use stored procedures, views, triggers etc.
Now, having seen the characteristics of both directories and RDBMSs, one might wonder: is it possible to marry the two? The answer is YES. You can have an LDAP directory as an application running on top of an RDBMS; Oracle and IBM, for example, provide this. Is this a good idea or not? Well, to answer that question I would have to change the focus of this blog, and I do not want to do that. I will try to address it in some other post.

· Fine, so in which scenarios should I use directories?
Based on the above information, you are now in a better position to judge when to use a directory and when to use an RDBMS. Still, what are the common scenarios in which directories are recommended?
Security:
A directory allows security down to the attribute level. Many directory-enabled applications are available to extend directory security mechanisms. You can also use these security features to control access to particular resources. Once such security rules are defined, they do not change frequently. That is the reason identity management applications use LDAP directories so extensively.
Most RDBMSs offer column-level security, but the advanced security solutions available with directories are far more flexible and granular.
Data:
It is really important to know what kind of data you are going to handle; based on its characteristics, you can figure out the right solution. For example, if the data is going to be static most of the time with few write operations, if it contains multi-valued attributes, and if you do not need transactions, then you know a directory is likely the best fit.
Business Requirement:
This is the most important factor in deciding on a solution. Based on the requirements and the cost-saving options, you can go for either of these solutions.

Cheers,
Amol.

Friday, September 22, 2006

Domestic (India) Software Market Challenges

We work for overseas clients. We earn in dollars, euros and pounds. We are helping people in developed countries make their lives better. Recently, I heard the story of an Indian company that implemented a traffic control and decongestion system in London. I am sure each one of us has lots of stories like this to tell. Terms like outsourcing, BPO, KPO etc. have become common words in our lives.
But there is huge potential for IT infrastructure and software in the domestic market. I strongly believe this market has not yet been tapped completely; there are going to be a lot of opportunities here as well. We have seen the mobile revolution here: the list of mobile companies and their subscribers just keeps growing by leaps and bounds. Lots of foreign companies are investing money and opening offices here. If you just look around, you can see the upward trend in every business area, from service providers to construction, infrastructure development, insurance, banking and so on. The same applies to the hardware market: lots of PCs, cell phones, laptops etc. are entering the lives of common people. The scenario looks very promising, and people (like me) who experienced the recession in the IT industry would agree. But somehow this upward trend is not completely visible in the domestic software market. Of course, it is definitely better than it was five or six years ago.
There are lots of surveys and predictions available in the market. Everybody is talking about the figures that various domestic industries will cross. From the automobile industry to health care and tourism, everything is going to grow tremendously, and some of the predicted figures are just unbelievable. The middle-class person now has access to world-class products within India, and at affordable rates. People are willing to spend. But are domestic companies willing to invest in IT solutions? Have these companies decided their IT roadmaps for the coming 10 years? Are they willing to consider Indian software vendors? Are they clear about their IT requirements? Some of them have a separate IT department, but do they have enough talent, skills and bandwidth to understand their requirements and build custom solutions? I am sure that most of the answers to these questions would be negative. So, what are the challenges in getting positive answers to them?

· How much money can we save?
If you try to sell your product to anyone, this is the first question he or she will probably ask. I am talking about people who are probably in their late fifties, people who have struggled a lot and have now reached the topmost positions in their companies. They are the ones actually running the local companies; they are the key decision makers. When you get an opportunity to talk to them or to market one of your software products or solutions, be assured that you will have to answer this question. So, the key point here is to understand their business. It is also necessary to understand their current IT investment, to step into their shoes and see what their requirements would be. Once you have enough data, you need to apply some formulae to come up with the total cost that would be saved. That figure is the starting point for any future dealings with that person. :)

· Okay, my company will save X amount if we buy your solution. But what are the other benefits?
To penetrate deeper into the Indian market, we need to show decision makers what benefits they will get. One needs to build lots of case studies and presentations to convince them. It is likely that these people are reluctant to make any change to the patterns and practices followed in their organizations, and they have solid reasoning for not changing: these patterns have been in use for the last 30-40 years, so why change them now? The idea here is to convince them that the world is changing, and that with the advances in communication and technology, these changes will help them make better decisions.

· Why should I buy from you?
Most domestic companies will buy products like SAP, Oracle etc. and spend lakhs of rupees on them, but they will be a little reluctant to buy solutions from local software vendors. We need to convince them that we can deliver the custom solutions they want. We need to tell them: we know your business, and we will give you local support 24x7.

· Requirements are not clear
It is a really tough task to get requirements from domestic customers; I have experienced this in the past. Some companies want to invest money in IT just because the funds have been allocated for the year. In such cases, it is very difficult for software people to gather requirements. The trick here is to stay calm and listen to what they want. If they are not able to articulate it, we need to propose requirements to them, which of course demands a thorough understanding of their business. We need to propose solutions that these people will actually use.

· Traditional / do-not-want-to-change mindset
Some people are not at all willing to change, or should I say adapt to the situation. If you face a person with this kind of mindset, there is a problem. These people will ask: why do we need computers? We have been doing this for so many years; we did not use laptops or cell phones to do business! First of all, we need to make them comfortable talking with us. Then you can say: “Listen, we used bullock carts earlier, and everything was fine. Then came trains and cars, and we started travelling by car, train and plane, which saves us a lot of time. If you don’t adopt this now, you will be left far behind in the race!”

· IT Road Maps
Most domestic companies do not have an IT road map fixed for the coming years. These companies need to start thinking about their road maps.

· Customized Products and Quality
Most off-the-shelf products are not directly suitable for these companies. We need to give them customized solutions and better-quality products. Quality is one of the important factors here, because these companies might have purchased software from small local vendors at some point, and chances are high that they were left frustrated by poor-quality work delivered at a cheap price. We need to build a trusting relationship with them and assure them that they will get better-quality solutions.

Cheers,
Amol Kulkarni.

Top 5 Web Service Mistakes…revisited

Last year, while reading a few articles, I came across a beautiful one written by Paul Ballard about the top 5 web service mistakes. I had worked extensively on web services using .NET, and after reading the article I realized its importance. These are not syntax mistakes but logical ones; they are architectural mistakes, not technological ones. The key is to avoid them from the beginning, not at deployment time.
So many people are writing .NET-based web services these days. It is very easy to write a web service using an IDE, especially Visual Studio .NET; this IDE is a real beauty. All you have to do is create a new project of type ASP.NET Web Service. VS.NET then writes a default web method for you; rename it, uncomment the code, put your business logic in it, compile, and you are ready for deployment. The necessary WSDL is created for you by VS.NET. When things are this simple, beginners find it very easy to put web services on their resumes. But easy things like this, if not planned and handled properly, can lead to disasters. That is why you should consider the following mistakes before writing web services.


Using .NET Specific Types
For a web service to be interoperable with any technology, its data contracts must be based on types defined by XML Schema. If you use .NET-specific types as parameters or return values, the XML the service produces may not be interoperable with Java or other technologies. For example: passing a DataSet as the result of a web service.
In this case, the structure of the DataSet is not known until runtime, so there is no way for the WSDL to describe its internal structure. This is the web service equivalent of late binding. Since the client developer has no idea about the structure of the object, they won’t be able to generate a very useful proxy; they will have to parse the returned XML to find the data they are looking for. You have probably seen demos in the past where the presenter returned a DataSet to a .NET Framework client and somehow it magically became a DataSet on the other end. This works because the web service infrastructure in the .NET Framework looks for a special flag called IsDataSet in the runtime-provided XML Schema definition.
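A common way to avoid this is to return plain, schema-describable types instead of a DataSet. The sketch below is hypothetical (the Customer class and service name are mine), but it shows the shape of an ASMX method whose WSDL fully describes its return type.

    using System.Web.Services;

    // A plain data class: its structure can be expressed in XML Schema,
    // so any client (Java, PHP, .NET) can generate a sensible proxy for it.
    public class Customer
    {
        public int Id;
        public string Name;
    }

    [WebService(Namespace = "http://example.org/customers")]
    public class CustomerService : WebService
    {
        [WebMethod]
        public Customer[] GetCustomers()
        {
            Customer c = new Customer();
            c.Id = 1;
            c.Name = "Contoso";
            return new Customer[] { c };   // placeholder data
        }
    }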


Not Taking Advantage of ASP.NET
Most developers fail to understand the exact relationship between .NET web services and ASP.NET. The biggest drawback today in consuming web services is performance. Web services developed in .NET are hosted in ASP.NET, so they have access to all the features of any ASP.NET application. For performance, the two most important features are caching and session state. Output caching allows the result of a service method request to be stored in the server’s cache; subsequent requests with the same parameters get the result from the cache.
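For instance, output caching on an ASMX method is just an attribute property. The service and method below are made up, but CacheDuration is the WebMethod setting that tells ASP.NET to serve repeated calls (with the same parameters) from the output cache.

    using System.Web.Services;

    public class QuoteService : WebService
    {
        // The serialized response is kept in the ASP.NET output cache for
        // 60 seconds; identical requests within that window skip the method body.
        [WebMethod(CacheDuration = 60)]
        public decimal GetQuote(string symbol)
        {
            return LookUpPrice(symbol);
        }

        private decimal LookUpPrice(string symbol)
        {
            return 42.50m;   // placeholder for the real (expensive) lookup
        }
    }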


Not Enough Bang for the Buck
A web service should be designed to maximize the amount of work performed with each request; it is always a good idea to reduce the number of server round trips. For better performance, consider combining smaller requests into a single larger request. For example, assume there is a web service that gives you the current market price for a requested stock. Typically, developers would write a web method that accepts one stock symbol and returns one price at a time, so to price ten stocks you would make ten trips to the server. This can be avoided by combining multiple requests into one.
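A hypothetical sketch of that same stock-quote service, first in its "chatty" form and then in a "chunky" form that prices several symbols in one round trip (the class and method names are illustrative):

    using System.Web.Services;

    public class StockService : WebService
    {
        // Chatty: one symbol per server round trip.
        [WebMethod]
        public decimal GetPrice(string symbol)
        {
            return LookUpPrice(symbol);
        }

        // Chunky: one round trip returns prices for all requested symbols.
        [WebMethod]
        public decimal[] GetPrices(string[] symbols)
        {
            decimal[] prices = new decimal[symbols.Length];
            for (int i = 0; i < symbols.Length; i++)
            {
                prices[i] = LookUpPrice(symbols[i]);
            }
            return prices;
        }

        private decimal LookUpPrice(string symbol)
        {
            return 10.00m;   // placeholder for the real lookup
        }
    }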


Using Web Services for Data Access
Many architects and developers tend to see web services as a means to share data. Web services don’t just provide access to data; they provide access to an organization’s proprietary business knowledge. The best example of a web service exposing functionality rather than data is Google’s web service for searching web pages. Google has leveraged its extensive knowledge of how to search the web, and that is where the value of its web service comes from, not its data. Remember that consumers of your web service want your business expertise, not just your data.


Trusting the Client Application
Apart from giving access to business knowledge, web services also have a responsibility to protect the business and the integrity of the data. Typically, web services are developed at the same time as the user interface that consumes them; the web service acts as a back end to the application, so security is left to the UI alone. The security of the UI and of the web service should be handled separately. Keep in mind that other applications (built by other organizations) may also use your back end. Don’t trust the data your web service receives, and protect the data it returns. Tracking web service usage is also important; you can think about using SOAP extensions for this.


These are some of the mistakes that are bound to happen when you work on web services. So, the next time you want to write a web service, you know what to avoid. :)
If you are a core web services developer using .NET 1.1, here are some enhancements in .NET 2.0 that you will find interesting:
1. Network Information API
2. Auto Proxy Discovery
3. HTTP Response Compression
4. Supports Decompression
5. Supports SOAP 1.1 as well as SOAP 1.2


Cheers,
Amol

Important tips for improving performance of .NET applications

When you are working on .NET-based applications, it is very important to know the key areas that affect the performance of your application. MSDN has lots of information about this. I have tried to consolidate and list some important tips for the performance tuning of .NET-based applications; I hope you will find them useful. Some of the tips apply to .NET 2.0 only. Happy performance tuning! :)

Use of Generics

  1. Language features collectively known as generics act as templates that allow classes, structures, interfaces, methods, and delegates to be declared and defined with unspecified, or generic, type parameters instead of specific types.
  2. It is recommended that all applications that target version 2.0 use the new generic collection classes instead of the older non-generic counterparts such as ArrayList.
  3. Version 2.0 of the .NET Framework class library provides a new namespace, System.Collections.Generic, which includes several ready-to-use generic collection classes and associated interfaces. Other namespaces such as System also provide new generic interfaces such as IComparable. These classes and interfaces are more efficient and type-safe than the non-generic collection classes provided in earlier releases of the .NET Framework. Before designing and implementing your own custom collection classes, consider whether you can use or derive a class from one of the classes provided in the base class library.
  4. Using generic collections is generally recommended, because you gain the immediate benefit of type safety without having to derive from a base collection type and implement type-specific members. In addition, generic collection types generally perform better than the corresponding non-generic collection types when the collection elements are value types, because with generics there is no need to box the elements (see the sketch after this list).
  5. The following generic types correspond to existing collection types:
  • List<T> is the generic class corresponding to ArrayList.
  • Dictionary<TKey, TValue> is the generic class corresponding to Hashtable.
  • Collection<T> is the generic class corresponding to CollectionBase. Collection<T> can be used as a base class, but unlike CollectionBase it is not abstract, making it much easier to use.
  • ReadOnlyCollection<T> is the generic class corresponding to ReadOnlyCollectionBase. ReadOnlyCollection<T> is not abstract, and has a constructor that makes it easy to expose an existing List<T> as a read-only collection.
  • The Queue<T>, Stack<T>, and SortedList<TKey, TValue> generic classes correspond to the respective non-generic classes.
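Here is a tiny sketch of the boxing point from item 4 above. With ArrayList every int is boxed on the way in and cast back out; with List<int> the values are stored directly and the compiler enforces the element type.

    using System;
    using System.Collections;
    using System.Collections.Generic;

    class GenericsDemo
    {
        static void Main()
        {
            // ArrayList stores object references: ints are boxed on Add
            // and must be cast (unboxed) on retrieval.
            ArrayList untyped = new ArrayList();
            untyped.Add(1);
            int first = (int)untyped[0];

            // List<int> stores the values directly: no boxing, no casting,
            // and adding a string would not even compile.
            List<int> typed = new List<int>();
            typed.Add(2);
            int second = typed[0];

            Console.WriteLine(first + second);
        }
    }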

Weak References

  1. Weak References are suitable for medium-to-large sized objects stored in a collection.
  2. Using weak references is one way of implementing a caching policy (see the sketch after this list).
  3. By using weak references, cached objects can be resurrected easily if needed or they can be released by garbage collection when there is memory pressure.
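A minimal sketch of that caching idea, assuming a large report payload that is expensive to rebuild but safe to recompute (the class and method names are illustrative):

    using System;

    class ReportCache
    {
        private WeakReference _cachedReport;

        public byte[] GetReport()
        {
            // Target is null either on first use or after the GC has
            // collected the report under memory pressure.
            byte[] report = (_cachedReport == null) ? null : (byte[])_cachedReport.Target;
            if (report == null)
            {
                report = BuildReport();                  // expensive to rebuild
                _cachedReport = new WeakReference(report);
            }
            return report;
        }

        private byte[] BuildReport()
        {
            return new byte[1024 * 1024];                // placeholder payload
        }
    }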

Memory Management

  1. Check that your code calls Dispose or Close on all classes that support these methods (a using-statement sketch follows this list). Common disposable resources include database-related, file-related, stream-related and network-related classes.
  2. Check that your code does not call GC.Collect.
  3. Finalization has an impact on performance. Identify which classes need a finalizer; typically, classes that use unmanaged resources will need one. Check that any class that provides a finalizer also implements IDisposable. Avoid implementing a finalizer on classes that do not require it, because it adds load to the finalizer thread as well as the garbage collector.
  4. Identify potentially long-running method calls. Check that you set any class-level member variables that you do not require after the call to null before making the call. This enables those objects to be garbage collected while the call is executing. There is no need to explicitly set local variables to null because the just-in-time (JIT) compiler can statically determine that the variable is no longer referenced.
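The using statement is the easiest way to guarantee those Dispose/Close calls. A small sketch with ADO.NET (the connection string, table and column names are hypothetical):

    using System.Data.SqlClient;

    class CustomerData
    {
        // Both the connection and the command are disposed even if an
        // exception is thrown inside the block.
        public static string GetCustomerName(string connectionString, int id)
        {
            using (SqlConnection connection = new SqlConnection(connectionString))
            using (SqlCommand command = new SqlCommand(
                       "SELECT Name FROM Customers WHERE Id = @id", connection))
            {
                command.Parameters.AddWithValue("@id", id);
                connection.Open();
                object result = command.ExecuteScalar();
                return (result == null) ? null : (string)result;
            }
        }
    }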

Looping and Recursion

  1. Even the slightest coding inefficiency is magnified when that code is located inside a loop. Loops that access an object's properties are a common culprit of performance bottlenecks, particularly if the object is remote or the property getter performs significant work.
  2. Repeated accessing of object properties can be expensive. Properties can appear to be simple, but might in fact involve expensive processing operations. Avoid repetitive field or property access.
  3. If you do use recursion, check that your code establishes a maximum number of times it can recurse, and ensure there is always a way out of the recursion and that there is no danger of running out of stack space.
  4. Optimize or avoid expensive operations within loops.
  5. Copy frequently called code into the loop.
  6. Consider replacing recursion with looping.
  7. Use for instead of foreach in performance-critical code paths (see the sketch after this list).
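A small sketch of points 2 and 7: the collection's Count property is read once outside the loop, and a plain for loop is used instead of foreach (the method and class names are illustrative):

    using System.Collections.Generic;

    class LoopDemo
    {
        public static int SumLengths(List<string> items)
        {
            int total = 0;
            int count = items.Count;            // read the property once, not per iteration
            for (int i = 0; i < count; i++)     // for avoids the enumerator overhead of foreach
            {
                total += items[i].Length;
            }
            return total;
        }
    }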

String Operations

  1. Avoid inefficient string concatenation.
  2. Use + when the number of appends is known.
  3. Use StringBuilder when the number of appends is unknown.
  4. Treat StringBuilder as an accumulator (see the sketch after this list).
  5. Use the overloaded Compare method for case-insensitive string comparisons.
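For example, when the number of appends depends on the input, a StringBuilder accumulator avoids creating a new string on every concatenation (this is a generic sketch, not from any particular project):

    using System.Text;

    class CsvBuilder
    {
        // Accumulate into one StringBuilder instead of repeatedly
        // concatenating immutable strings in a loop.
        public static string Join(string[] values)
        {
            StringBuilder builder = new StringBuilder();
            for (int i = 0; i < values.Length; i++)
            {
                if (i > 0)
                {
                    builder.Append(',');
                }
                builder.Append(values[i]);
            }
            return builder.ToString();
        }
    }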

Arrays

  1. Prefer arrays to collections unless you need functionality. Arrays also avoid the boxing and unboxing overhead.
  2. Use strongly typed arrays.
  3. Use jagged arrays instead of multidimensional arrays.
  4. Arrays have a static size. The size of the array remains fixed after initial allocation. If you need to extend the size of the array, you must create a new array of the required size and then copy the elements from the old array.
  5. Arrays support indexed access. To access an item in an array, you can use its index.
  6. Arrays support enumerator access. You can access items in the array by enumerating through the contents using the foreach construct (C#) or For Each (Visual Basic .NET).
  7. Memory is contiguous. The CLR arranges arrays in contiguous memory space, which provides fast item access.

Collections

  1. Initialize the collection to an approximate final size. It is more efficient to initialize collections to a final approximate size even if the collection is capable of growing dynamically.
  2. Storing value types in a collection involves a boxing and unboxing overhead. The overhead can be significant when iterating through a large collection for inserting or retrieving the value types. Consider using arrays or developing a custom, strongly typed collection for this purpose.

Hash Table:

a. Do you store small amounts of data in a Hashtable? If you store small amounts of data (10 or fewer items), this is likely to be slower than using a ListDictionary. If you do not know the number of items to be stored, use a HybridDictionary.

b. Do you store strings? Prefer StringDictionary instead of Hashtable for storing strings, because this preserves the string type and avoids the cost of up-casting and down-casting during storing and retrieval.

c. Have you thought of using the generic Dictionary<TKey, TValue> class? The Dictionary generic class provides a mapping from a set of keys to a set of values. Each addition to the dictionary consists of a value and its associated key. Retrieving a value by its key is very fast, close to O(1), because the Dictionary class is implemented as a hash table (see the sketch after this section).

d. Do you use SortedList? You should use a SortedList to store key-and-value pairs that are sorted by the keys and accessed by key and by index. New items are inserted in sorted order, so the SortedList is well suited for retrieving stored ranges. You should use a SortedList if you need frequent re-sorting of data after small inserts or updates. If you need to perform a number of additions or updates and then re-sort the whole collection, an ArrayList performs better than the SortedList.
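A quick sketch of the generic Dictionary mentioned in point (c); the symbols and prices are made-up values:

    using System;
    using System.Collections.Generic;

    class PriceTable
    {
        static void Main()
        {
            // Strongly typed key/value pairs: no boxing of the decimal values
            // and near O(1) lookup by key.
            Dictionary<string, decimal> prices = new Dictionary<string, decimal>();
            prices["MSFT"] = 27.35m;
            prices["IBM"] = 81.10m;

            decimal price;
            if (prices.TryGetValue("MSFT", out price))
            {
                Console.WriteLine(price);
            }
        }
    }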

ArrayList:

a. Do you store strongly typed data in ArrayLists? Use ArrayList to store custom object types, particularly when the data changes frequently and you perform frequent insert and delete operations.

b. Do you use Contains to search ArrayLists? Store presorted data and use ArrayList.BinarySearch for efficient searches. Sorting and linear searches using Contains are inefficient; this is of particular significance for large lists. If you only have a few items in the list, the overhead is insignificant. If you need several lookups, consider a Hashtable instead of an ArrayList.

Reflection and Late Binding

  1. Prefer early binding and explicit types rather than reflection.
  2. Avoid late binding.
  3. Avoid using System.Object in performance critical code paths.
  4. Framework APIs such as Object.ToString use reflection. Although ToString is a virtual method, the base Object implementation of ToString uses reflection to return the type name of the class. Implement ToString on your custom types to avoid this (see the sketch after this list).
  5. Avoid using System.Object to access custom objects because this incurs the performance overhead of reflection. Use this approach only in situations where you cannot determine the type of an object at design time.
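A small sketch of point 4: overriding ToString on your own type sidesteps the reflection-based default (the Order class here is hypothetical):

    using System;

    class Order
    {
        private readonly int _id;
        private readonly string _customer;

        public Order(int id, string customer)
        {
            _id = id;
            _customer = customer;
        }

        // Without this override, Object.ToString would use reflection just to
        // return the type name; this version is cheaper and more useful in logs.
        public override string ToString()
        {
            return "Order " + _id + " for " + _customer;
        }
    }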

Class Design Considerations

  1. Do not make classes thread safe by default.
  2. Consider using the sealed keyword.
  3. Consider the tradeoffs of virtual members.
  4. Consider using overloaded methods.
  5. Consider overriding the Equals method for value types.
  6. Know the cost of accessing a property.
  7. Consider private vs. public member variables.
  8. Limit the use of volatile fields.

Threading Considerations

  1. Minimize thread creation.
  2. Use the thread pool when you need threads.
  3. Use a Timer to schedule periodic tasks (see the sketch after this list).
  4. Consider parallel vs. synchronous tasks.
  5. Do not use Thread.Abort to terminate other threads.
  6. Do not use Thread.Suspend and Thread.Resume to pause threads.
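A minimal sketch of points 2 and 3, using the thread pool for a one-off task and a Timer for periodic work (the messages and the five-second interval are arbitrary):

    using System;
    using System.Threading;

    class BackgroundWork
    {
        static void Main()
        {
            // Short-lived work goes to the thread pool instead of a new Thread.
            ThreadPool.QueueUserWorkItem(delegate(object state)
            {
                Console.WriteLine("Processing on a pool thread.");
            });

            // Periodic work is scheduled with a Timer (fires every 5 seconds).
            using (Timer timer = new Timer(delegate(object state)
            {
                Console.WriteLine("Periodic tick.");
            }, null, TimeSpan.Zero, TimeSpan.FromSeconds(5)))
            {
                Console.ReadLine();   // keep the process alive while the timer runs
            }
        }
    }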

Asynchronous Call Considerations

  1. Consider client-side asynchronous calls for UI responsiveness.
  2. Use asynchronous methods on the server for I/O bound operations.
  3. Avoid asynchronous calls that do not add parallelism.
  4. For each call to BeginInvoke, make sure your code calls EndInvoke to avoid resource leaks.
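The last point is worth a sketch: every BeginInvoke on a delegate must be paired with EndInvoke, which also re-throws any exception from the asynchronous call (the delegate and method here are illustrative):

    using System;

    class AsyncInvokeDemo
    {
        private delegate int WorkHandler(int input);

        static void Main()
        {
            WorkHandler work = DoWork;

            // Starts DoWork on a thread-pool thread.
            IAsyncResult asyncResult = work.BeginInvoke(21, null, null);

            // Always pair BeginInvoke with EndInvoke; this blocks until the
            // call completes and releases the resources the call used.
            int result = work.EndInvoke(asyncResult);
            Console.WriteLine(result);
        }

        private static int DoWork(int input)
        {
            return input * 2;
        }
    }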

Cheers,
Amol

Overview of Reporting Services for SQL Server

Generating reports from SQL Server data and data cubes was never this easy before the birth of SQL Server Reporting Services. These services offer much more flexibility and many features for generating reports in various formats in very little time. Report subscription is one more new feature added to the list.
I would like to give a brief overview of Reporting Services in this blog.
Reporting Services:
With tons of historical data available in an organization, it becomes very difficult to use it intelligently and wisely. This information can be used to identify various trends in the organization, and it is definitely useful in improving efficiency and effectiveness. This is why Business Intelligence (BI) is a buzzword today.
Reporting Services enable employees at all levels of an organization to realize the promise of BI to promote better decision making.
Delivered through:
· Traditional and interactive reports
· Scalable, manageable and embeddable server infrastructure
· Integration with SharePoint, Office applications, browser and other familiar tools
· Single platform and tools for all types of structured data
Some of the key points are:
· Server-based reporting engine.
· Creates tabular, matrix, graphical and free-form reports.
· Reports are viewed over the web.
· Easily integrates into existing applications.
· Report designer very similar to MS Access.
· Standardized Report Definition Language (RDL).

Following are some of the main report formats supported by Reporting Services:
· HTML 3.2 & HTML 4.0
· TIFF
· Acrobat (PDF)
· Web Archive File (MHTML)
· Microsoft Excel
· Comma Separated File (CSV)
· HTML With Office Web Components
· XML File with Report Data.

It is also important to understand the reporting life cycle, which is divided into three parts: Authoring, Management and Delivery.

Authoring
· It is basically the process of creating report definitions through the use of Report Authoring Tools.
· Authoring tools transform the report design into a report definition based on Report Definition Language (RDL).
· The Report Definition contains layout, connection and query information.
· Report Designer can be used within VS.NET.

Management
· Report Definitions, folders, and resources are published and managed as a Web Service.
· Managed reports can be executed either on demand or on a specified schedule.
· Reports can be cached for consistency and performance.
· Report Manager is a web-based report access and management tool.

Delivery
· Basically, two delivery methods are available: on-demand access and push subscriptions.
· For on-demand delivery, the user normally selects the report using Report Manager.
· Push subscriptions automatically generate the report on a schedule and store it at a destination.
· Report delivery can be automated using e-mail, the file system or custom delivery options.

Report Processing Flow:

The intermediate format can be used in a variety of ways: caching, snapshots, history etc. When a report is accessed, the Report Server decides whether to generate the report from scratch or to use the cached report. This blog is just an introduction to Reporting Services; I am planning to write about each feature in detail.

Cheers,
Amol

How should developers deal with the upcoming technologies?

Every day we hear or read about new technologies, new models of cell phones and PDAs, new tools, new versions of existing tools and so on. This list is only going to grow, and with so much information available at the push of a button, it definitely hits us at some point. People say ignorance is bliss, but the fact is that you cannot ignore these things in this world; if you do, you are thrown out of the race. There can be a lot of debate about this. In this blog, I would like to focus only on developers and the challenges that upcoming technologies pose for them.

For Microsoft developers, the road map for the last few years has looked something like this:
VB (Visual Studio) > OLE > COM/DCOM > COM+ > ASP (Visual InterDev) > BizTalk > .NET Beta (C#, VB.NET etc.) > ASP.NET > .NET 1.0 (Visual Studio .NET) > .NET 1.1 (Visual Studio .NET 2003) > BizTalk Server 2002

And then we have .NET 2.0 and VS 2005 / VSTS, which include a lot of changes to the framework classes, ASP.NET, C#, VB.NET and other languages. Very soon, MS will come out with .NET 3.0, Vista, WinFX etc.

If someone has been working on .NET 1.1 for the last three or four years, the first thing he or she sees in the new version of .NET (2.0) is the learning curve. Having become comfortable with the older framework version, it is sometimes frustrating to move to the new one. There may be syntax changes, new features, tuning of some classes etc. in the new version, but some developers will be reluctant to learn them since they have become gurus in the old technology. Yes, based on the requirements, project managers or project leads can require their team members to undergo training and use the new version for a project. But some people will still think as if they were using the old version and implement their logic accordingly. This is a really dangerous sign: it can affect the performance of your application and can literally lead to disasters. The key point is not to use new technology just for the sake of using it, or just because it is new and your client wants it. Try to understand the potential, features and usage scenarios of the latest technology. If you know its previous version, compare the feature lists and usage between the old and new versions. For example, .NET 2.0 recommends using the generic collection classes rather than collections like ArrayList. One should really understand the reasoning behind this. I can use Hashtable as well as the Dictionary class, but using the Dictionary class is going to give me some improvement in performance; that is why it was added to the new set of generic collection classes. The same applies to generics in general.

There are tons of changes between the latest version of ASP.NET and the old versions, i.e. ASP.NET 1.0/1.1. One has to understand the differences and the importance of using the new features. AJAX / ASP.NET Atlas gives us the power to do many things asynchronously, eventually making end users happy and giving them a rich user experience. To summarize, every developer has to ask himself or herself a few questions:
Okay, so there is a new version of .NET today, and tomorrow there will be yet another new, enhanced version. So what do I do?
True, but if I don’t learn and understand the new versions and functionality, I will be left far behind the others. There are lots of new developers starting their careers with the new versions of tools and technologies; they are going to stay ahead all the time if I do not participate in this race. Sooner or later I will have to learn it, and sooner is better.
Why should I use it just for the sake of using it?
You should not use it just for the sake of using it. You must know its importance and its impact.
I am a techie and I want to keep on writing/designing/architecting applications.
Right, but if you are not in sync with what is new, you might be missing something, and that might cause flaws in your applications and designs.

This list can go on. Remember one thing: new technologies are here to help make our lives better. The more we understand them, the fewer complexities we will have while writing applications.

Cheers,
Amol.