Saturday, September 20, 2008

Count Occurrences of a Word in a String (LINQ)

using System;
using System.Linq;

class CountWords

{

    static void Main()

    {

        string text = @"Historically, the world of data and the world of objects" +

          @" have not been well integrated. Programmers work in C# or Visual Basic" +

          @" and also in SQL or XQuery. On the one side are concepts such as classes," +

          @" objects, fields, inheritance, and .NET Framework APIs. On the other side" +

          @" are tables, columns, rows, nodes, and separate languages for dealing with" +

          @" them. Data types often require translation between the two worlds; there are" +

          @" different standard functions. Because the object world has no notion of query, a" +

          @" query can only be represented as a string without compile-time type checking or" +

          @" IntelliSense support in the IDE. Transferring data from SQL tables or XML trees to" +

          @" objects in memory is often tedious and error-prone.";

        string searchTerm = "data";

        //Convert the string into an array of words

        string[] source = text.Split(new char[] { '.', '?', '!', ' ', ';', ':', ',' }, StringSplitOptions.RemoveEmptyEntries);

        // Create and execute the query. It executes immediately

        // because a singleton value is produced.

        // Use ToLowerInvariant to match "data" and "Data"

        var matchQuery = from word in source

                         where word.ToLowerInvariant() == searchTerm.ToLowerInvariant()

                         select word;

        // Count the matches.

        int wordCount = matchQuery.Count();

        Console.WriteLine("{0} occurrences(s) of the search term \"{1}\" were found.", wordCount, searchTerm);

        // Keep console window open in debug mode

        Console.WriteLine("Press any key to exit");

        Console.ReadKey();

    }

}

/* Output:

   3 occurrence(s) of the search term "data" were found.

*/

Wednesday, August 27, 2008

New learnings in Java

1. There is no need to initialize a static variable explicitly, because static variables receive default values (zero, false, or null). However, when you combine the "final" keyword with static, you must initialize the variable, otherwise you will get a compile-time error (see the sketch after this list).

2. Only a class whose definition is complete can be declared final. Once a class is declared final, it cannot be extended and none of its methods can be overridden.

3. You cannot instantiate a class that is declared abstract.

4. A final variable declared without an initializer is known as a "blank final"; final variables are also used as named constants.

5. The abstract and final modifiers never work together.

6. You cannot declare a transient variable inside a method; transient applies only to fields.

7. For efficiency, the JVM may cache a variable's value in thread-local memory, which is not safe in multi-threaded code. To prevent the JVM from serving a cached value, use the "volatile" keyword so that reads and writes go to main memory.

8. You cannot use final and volatile on the same variable.

9. VirtualMachineError and LinkageError are subclasses of the Error class.

10. Each assertion contains a boolean expression that is expected to be true when the assertion executes. If it evaluates to false, the system throws an AssertionError.
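
A minimal Java sketch (hypothetical class names) illustrating points 1, 2, 7 and 10; note that assertions only fire when the JVM is started with the -ea flag:

class Counter {
    static int plainStatic;              // static: defaults to 0, no initializer required
    static final int MAX = 100;          // static final: must be initialized or it will not compile
    volatile boolean running = true;     // volatile: reads and writes go to main memory

    void stop() {
        running = false;                 // visible to other threads without extra synchronization
    }
}

final class FixedCounter extends Counter { }   // final class: FixedCounter itself cannot be extended

class AssertDemo {
    public static void main(String[] args) {
        int value = -1;
        assert value >= 0 : "value must be non-negative";  // throws AssertionError when run with -ea
    }
}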

Where to find wsdl.exe file?

Easy :)

It will be at %HOME_DIR_OF_VISUALSTUDIO_INSTALLATION%\SDK\V2.0\BIN

Creating a Stored Procedure

You can create stored procedures using the CREATE PROCEDURE Transact-SQL statement. Before creating a stored procedure, consider that:

-> CREATE PROCEDURE statements cannot be combined with other SQL statements in a single batch.
-> Permission to create stored procedures defaults to the database owner, who can transfer it to other users.
-> Stored procedures are database objects, and their names must follow the rules for identifiers.
-> You can create a stored procedure only in the current database.
-> When creating a stored procedure, you should specify (see the example after this list):
   -> Any input parameters and output parameters to the calling procedure or batch.
   -> The programming statements that perform operations in the database, including calling other procedures.
   -> The status value returned to the calling procedure or batch to indicate success or failure (and the reason for failure).
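
As a minimal sketch (hypothetical table and procedure names), a procedure with an input parameter, an output parameter, and a return status might look like this:

-- Input parameter, output parameter, and return status (dbo.Orders is a hypothetical table).
CREATE PROCEDURE dbo.usp_GetCustomerTotal
    @CustomerID int,              -- input parameter
    @TotalDue   money OUTPUT      -- output parameter returned to the caller
AS
BEGIN
    SELECT @TotalDue = SUM(TotalDue)
    FROM dbo.Orders
    WHERE CustomerID = @CustomerID

    IF @TotalDue IS NULL
        RETURN 1                  -- status value: no matching rows
    RETURN 0                      -- status value: success
END
GO

-- Calling it from a batch:
DECLARE @total money, @status int
EXEC @status = dbo.usp_GetCustomerTotal @CustomerID = 42, @TotalDue = @total OUTPUT
SELECT @status AS Status, @total AS TotalDue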


System Stored Procedures
Many of your administrative activities in Microsoft® SQL Server™ 2000 are performed through a special kind of procedure known as a system stored procedure. System stored procedures are created and stored in the master database and have the sp_ prefix. System stored procedures can be executed from any database without having to qualify the stored procedure name fully using the database name master.

It is strongly recommended that you do not create any stored procedures using sp_ as a prefix. SQL Server always looks for a stored procedure beginning with sp_ in this order:
-> The stored procedure in the master database.
-> The stored procedure based on any qualifiers provided (database name or owner).
-> The stored procedure using dbo as the owner, if one is not specified.
Therefore, although the user-created stored procedure prefixed with sp_ may exist in the current database, the master database is always checked first, even if the stored procedure is qualified with the database name.

Important: If any user-created stored procedure has the same name as a system stored procedure, the user-created stored procedure will never be executed.
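
A quick sketch of this shadowing behavior (sp_helpdb is just an example of an existing system procedure):

-- The user-created procedure below is shadowed by the system procedure of the
-- same name in master, so per the note above it will never be executed.
CREATE PROCEDURE dbo.sp_helpdb AS SELECT 'user-created version' AS Source
GO
EXEC sp_helpdb   -- runs the system stored procedure from master, not the user-created one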

Grouping
A procedure can be created with the same name as an existing stored procedure if it is given a different identification number, which allows the procedures to be grouped logically. Grouping procedures with the same name allows them to be deleted at the same time. Procedures used in the same application are often grouped this way. For example, the procedures used with the my_app application might be named my_proc;1, my_proc;2, and so on. Deleting my_proc deletes the entire group. After procedures have been grouped, individual procedures within the group cannot be deleted.
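
A minimal sketch using the my_proc names from the paragraph above:

CREATE PROCEDURE my_proc;1 AS SELECT 'first procedure in the group'
GO
CREATE PROCEDURE my_proc;2 AS SELECT 'second procedure in the group'
GO

EXEC my_proc;2          -- executes a specific member of the group
DROP PROCEDURE my_proc  -- drops the whole group (both ;1 and ;2)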

Temporary Stored Procedures
Private and global temporary stored procedures, analogous to temporary tables, can be created with the # and ## prefixes added to the procedure name. # denotes a local temporary stored procedure; ## denotes a global temporary stored procedure. These procedures do not exist after SQL Server is shut down.
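
For example (a sketch with hypothetical procedure names):

CREATE PROCEDURE #local_temp_proc AS SELECT GETDATE() AS CurrentTime
GO
CREATE PROCEDURE ##global_temp_proc AS SELECT @@VERSION AS ServerVersion
GO

EXEC #local_temp_proc    -- only the connection that created it can execute this
EXEC ##global_temp_proc  -- any connection can execute this while the creator's connection is open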

Temporary stored procedures are useful when connecting to earlier versions of SQL Server that do not support the reuse of execution plans for Transact-SQL statements or batches. Applications connecting to SQL Server version 2000 should use the sp_executesql system stored procedure instead of temporary stored procedures. For more information, see Execution Plan Caching and Reuse.

Only the connection that created a local temporary procedure can execute it, and the procedure is automatically deleted when the connection is closed (when the user logs out of SQL Server).

Any connection can execute a global temporary stored procedure. A global temporary stored procedure exists until the connection used by the user who created the procedure is closed and any currently executing versions of the procedure by any other connections are completed. Once the connection that was used to create the procedure is closed, no further execution of the global temporary stored procedure is allowed. Only those connections that have already started executing the stored procedure are allowed to complete.

If a stored procedure not prefixed with # or ## is created directly in the tempdb database, the stored procedure is automatically deleted when SQL Server is shut down because tempdb is re-created every time SQL Server is started. Procedures created directly in tempdb exist even after the creating connection is terminated. As with any other object, permissions to execute the temporary stored procedure can be granted, denied, and revoked to other users.

WSDL to Java using Eclipse

Hi Friends,

I have spent almost three hours finding a way to convert WSDL to Java, and I finally got it working, so I am sharing my findings with all of you so you can at least save some time.

1. Download Eclipse if you do not already have it.
2. Download org.apache.axis.wsdl2java.eclipse_1.1.0.1.zip, extract it, and put it under the plugins directory of the Eclipse installation.
3. Download org.apache.axis_1.1.zip, extract it, and put it under the plugins directory of the Eclipse installation.
4. Select a file container (folder) in Eclipse, right-click to open the pop-up menu, and select Import.



5. The first page of the Import wizard will appear on screen. Select the Import Web Service reference option.



6. Follow the instructions and press the Finish button.

Tuesday, August 26, 2008

Optimizing the Display of Simple Tables

Microsoft's ASP technology enables beginners to write dynamic web pages with little effort. The ADO object model hides the complexity of obtaining data from the database. However, hiding complexity under a simple interface also allows unsuspecting programmers to write wildly inefficient code. Consider the common task of querying the database and displaying the results in an HTML table.

One of the slowest methods is to loop through the recordset and concatenate each row into a string. Once the loop is complete, the string is written to the response. Many novices may apply this technique due to its logical simplicity, or by following the bad example of others. However, for anything but very small data sets, this technique is highly inefficient. The next code example shows how this technique might be used.

SIMPLETABLEExample.ASP
========================

<%@ Language=VBScript %>
<%
Option Explicit
Dim StartTime, EndTime
StartTime = Timer
Dim objCN ' ADO Connection object
Dim objRS ' ADO Recordset object
Dim strsql ' SQL query string
Dim strTemp ' a temporary string
' Create a connection object
Set objCN = Server.CreateObject("ADODB.Connection")
' Connect to the data source
objCN.ConnectionString = "DSN=datasource"
objCN.Open
' Prepare a SQL query string
strsql = "SELECT * FROM tblData"
' Execute the SQL query and set the implicitly created recordset
Set objRS = objCN.Execute(strsql)
' Write out the results in a table by concatenating into a string
Response.write "<table>"
Do While Not objRS.EOF
  strTemp = strTemp & "<tr><td>" & objRS("field1") & "</td>"
  strTemp = strTemp & "<td>" & objRS("field2") & "</td>"
  strTemp = strTemp & "<td>" & objRS("field3") & "</td>"
  strTemp = strTemp & "<td>" & objRS("field4") & "</td></tr>"
  objRS.MoveNext
Loop
Response.write strTemp
Response.write "</table>"
Set objCN = Nothing
Set objRS = Nothing
EndTime = Timer
Response.write "<p>processing took " & (EndTime - StartTime) & " seconds<p>"
%>


Test Results
============
Records      Time
=======      ====
1000         3.5 seconds
2000         18.4 seconds
10000        7.5 minutes (est.)
20000        30 minutes (est.)

The server processing time to display 1000 records from the table is about 3.5 seconds. Doubling the number of records to 2000 more than quadruples the time to 18.4 seconds. The script times out for the other tests, but some time estimates are given. In the code, the '&' concatenation operator is used heavily within the loop.

Concatenation in VBScript requires new memory to be allocated and the entire string to be copied. If the concatenation is accumulating in a single string, then an increasingly long string must be copied on each iteration. This is why the time increases as the square of the number of records. Therefore, the first optimization technique is to avoid accumulating the database results into a string.

Eliminating Concatenation From the Loop
Concatenation may be removed easily by using Response.write directly in the loop. (In ASP.NET, the StringBuilder class can be used for creating long strings, but Response.write is fastest.) By eliminating accumulation, the processing time becomes proportional to the number of records being printed, rather than quadratic.

Each use of the concatenation operator results in unnecessary memory copying. With larger recordsets or high-load servers, this time can become significant. Therefore, instead of concatenating, programmers should simply write out the data with liberal use of Response.write. The code snippet below shows that even a few non-accumulative concatenations cause a noticeable time difference when run repeatedly.

' Using concatenation in a loop takes 1.93 seconds.
For i = 0 To 500000
Response.write vbTab & "foo" & vbCrLf
Next
' Using multiple Response.write calls takes 1.62 seconds.
For i = 0 To 500000
Response.write vbTab
Response.write "foo"
Response.write vbCrLf
Next

Thursday, July 3, 2008

Ratproxy

Google has released for free one of its internal tools used for testing the security of Web-based applications.

Ratproxy, released under an Apache 2.0 software license, looks for a variety of coding problems in Web applications, such as errors that could allow a cross-site scripting attack or cause caching problems.

Ratproxy is a semi-automated, largely passive web application security audit tool, optimized for accurate and sensitive detection, and automatic annotation, of potential problems and security-relevant design patterns based on the observation of existing, user-initiated traffic in complex Web 2.0 environments.

It detects and prioritizes broad classes of security problems, such as dynamic cross-site trust model considerations, script inclusion issues, content serving problems, insufficient XSRF and XSS defenses, and much more.

Ratproxy is currently believed to support Linux, FreeBSD, MacOS X, and Windows (Cygwin) environments.

Please find more details about it at: http://www.networkworld.com/news/2008/070308-google-gives-away-free-web.html?netht=rn_070308&nladname=070308dailynewspmal

Wednesday, July 2, 2008

Measuring SQL Performance

One thing that often amazes me is that many SQL Server developers do not actually measure the performance of their queries. When you are working with a small site or home project you might not see a big difference, but when implementing systems with large numbers of users and high levels of traffic you cannot just settle for the fact that your query returns the expected result. You must also make sure that your queries use the least amount of resources and execute as quickly as possible. Sure, you can read articles and literature that describe how to write queries that perform well, but you still cannot be sure that they work in an optimal way for your specific situation. After all, different schema designs, amounts of data, hardware resources and so on all affect how a query performs. And one of the problems with SQL is that you can write the same query (i.e. one that returns the same results) in many different ways, and the performance of these different formulations will often differ as well.

When I started investigating why some developers did not compare the performance of their queries, it became clear to me that the main reason is that they do not know how to do this in an easy way. Many of them thought that you needed external tools, more or less complicated, to run against your server, and they did not have the time or inclination to learn and try these. This article will describe a couple of much easier methods of measuring the performance of queries.

Time

The simplest way to measure performance is of course to measure the time it takes to execute a query. If you have not already noticed it, take a look in the status bar at the bottom-right corner of Query Analyzer. There you will find a timer that shows how many hours:minutes:seconds it took for a query (or rather the entire script) to execute. This is of course not a very exact measurement. Most queries you want to measure will probably not take more than a second to run; in a high-traffic environment they should probably execute in milliseconds if they are correctly optimized. So you need a better instrument to measure the execution time of a query.

Another, and better, way to measure the amount of time it takes for a query to execute is to use the built-in function GETDATE(). Example 1 shows how you can do this. The example uses the command WAITFOR to make the query execution 'stand still' for as long as we specify with DELAY. By first storing the present date and time when the execution begins and then comparing this to the date and time when the execution finishes, we get a more exact measurement, with milliseconds specified. Note however that the time is only specified down to 1/300 of a second (i.e. 3.33 ms). So, if a query takes 40 ms to execute, that means somewhere between 40 and 43 ms.

-- Example 1
DECLARE @start datetime, @stop datetime
SET @start = GETDATE()
 
 
WAITFOR DELAY '00:00:00.080' -- do not do anything for 80 ms
 
 
SET @stop = GETDATE()
 
 
SELECT 'The execution took ' + CONVERT(varchar(10), DATEDIFF(ms, @start, @stop)) + ' ms to finish'

STATISTICS TIME

The best way to measure time however is to use the configuration setting SET STATISTICS TIME. The syntax for this is as shown below:

SET STATISTICS TIME {ON | OFF}

When this parameter is set to on the results pane of Query Analyzer will show statistics for the time it took to execute a query. Note that if you are running QA in grid mode you will need to switch to the Messages tab to see this. Example 2 demonstrates this:

-- Example 2
USE Northwind
GO
 
SET STATISTICS TIME ON
 
SELECT * FROM orders

In my results pane I get the following text:

SQL Server Execution Times:

  CPU time = 0 ms,  elapsed time = 0 ms.
SQL Server parse and compile time: 
 CPU time = 0 ms, elapsed time = 0 ms.
 
 
(830 row(s) affected)
 
SQL Server Execution Times:
 CPU time = 30 ms,  elapsed time = 500 ms.

At first glance this might seem complicated to understand, but more or less the only thing you need to do is to look for the row with SQL Server Execution Times that is printed right after the text specifying the number of affected rows. Above this you can see the time it took to parse and compile the query, but that time is not what we are interested in here. Most of the time this will be 0 ms if you run the same query several times in a row, since the execution plan will already be cached. As said earlier, what we are looking for is the time it took to execute the query. In the example above it needed 30 ms of CPU time, but the total amount of time needed was 500 ms (try replacing the WAITFOR statement in example 1 with the select statement in example 2 and see if GETDATE gives you the same measurement). But if CPU time was only 30 ms, then where are the remaining 470 ms? The answer is I/O.

STATISTICS IO

As you probably know, I/O is short for Input/Output. You could say that it means reading/writing resources, and normally that means reading/writing from/to disk or memory. Very simply described, SQL Server needs to have the data pages containing the data to return to the client stored in memory (RAM). If they are not already cached there, they must first be read from disk, where they are physically stored, and then placed in memory, from where they can be returned to the client. The data pages will then be cached in memory for an unspecified time, which depending on several factors can range from zero to indefinitely. Therefore a query might need more time to execute the first time you run it, and because of this you should always execute the query a couple of times when measuring its performance.

It is not only the time it takes for a query to execute that is interesting when measuring performance. Equally important (and often even more) is the amount of system resources that is needed to execute it. Since I/O is normally the slowest part of a query, especially if physical disk access is needed, it is very important to know the amount of I/O resources needed to execute it. The way to measure this is to use another configuration setting called SET STATISTICS IO. The syntax for this is similar to that of SET STATISTICS TIME:

SET STATISTICS IO {ON | OFF}

The result however is different. Again, look in the text of the results pane in QA. I executed example 2 a couple of times and the result is shown below:

Table 'Orders'. Scan count 1, logical reads 22, physical reads 0, read-ahead reads 0.

First we have the table name. Then comes the number of times this table was scanned, or rather accessed, to fetch the result of the query. The next parts tell us how many pages (data and/or index) were read from the cache in memory to fetch the results and how many pages were read from disk, and the final number, called read-ahead reads, shows how many pages were placed into the cache for the query. The numbers you should normally look at are logical and physical reads plus scan count, and they should all of course be as low as possible. It might be better to have 100 logical reads than 10 physical reads, since it is faster to read from memory, but generally speaking they should both be as low as possible. If you execute a query a couple of times, physical reads will often be 0 since the data pages will already be cached after the first execution. Use these numbers to compare the resources needed when executing the same query formulated in different ways.

Other tools

With the above mentioned tools you have a good way of deciding which of several different versions of a query you should use to get the results you want in an optimal way. There are lots of other tools available as well, but I will not discuss them in this article. If you want to experiment with them yourself I would recommend you take a look at the following tools:

  • Show execution plan: by pressing Ctrl-K you get an extra tab when executing queries in QA. This tab shows a graphical representation of the execution plan used by SQL Server to execute your query. There is a lot of information in Books Online about how to use the information shown there, as well as articles online.
  • SET STATISTICS PROFILE: This configuration option gives you a text-based variant of the execution plan.
  • SET SHOWPLAN_ALL and SET SHOWPLAN_TEXT: These options both present information regarding the resources and execution plan that would be used to execute the query, without actually executing it (see the sketch after this list).
  • Profiler, Sysmon (Performance Monitor) and other external applications: Finally, there are several external applications that can be used to measure and show different events and measurements in SQL Server and the system it runs on. Profiler, a tool in the SQL Server client tools pack, connects to SQL Server and logs all kinds of different events that occur, and Sysmon can of course be used to log measurements for a huge number of performance counters, both for SQL Server and for the system as a whole.
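
As a minimal sketch of the SHOWPLAN options (each SET statement must be the only statement in its batch; the query reuses the Northwind orders table from example 2):

SET SHOWPLAN_TEXT ON
GO
SELECT * FROM orders WHERE customerid = 'VINET'   -- the plan is returned, the query is not executed
GO
SET SHOWPLAN_TEXT OFF
GO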

 

Creating a PDF from a Stored Procedure

This article explains how to create a stored procedure that will in turn create a simple column-based report in PDF, without using any external tools or libraries (and their associated licensing costs!).

SQL2PDF makes a PDF report from text inserted in the table psopdf ( nvarchar(80) ). First a table named psopdf should be created.

CREATE TABLE psopdf (code NVARCHAR(80)) 

After that create the stored procedure SQL2PDF.

SQL2PDF.TXT

The table psopdf then has to be filled with your data, as shown in the examples below.
At the end, the stored procedure is called with the file name only (no extension).

EXEC sql2pdf 'fileName'

The result is in your C:\ directory.

EXAMPLE 1:

INSERT psopdf(code) SELECT SPACE(60) + 'COMPANY LTD'
INSERT psopdf(code) SELECT SPACE(60) + 'COMPANY ADDRESS'
INSERT psopdf(code) SELECT SPACE(60) + 'STREET NAME & No'
INSERT psopdf(code) SELECT ' '
INSERT psopdf(code) SELECT SPACE(34) + 'BILL OF SALE'
INSERT psopdf(code) SELECT ' '
INSERT psopdf(code) SELECT 'Product' + SPACE(10) + 'Quantity'
+ SPACE(10) + 'Price' + SPACE(10) + 'Total'
INSERT psopdf(code) SELECT REPLACE(SPACE(56), ' ', '_')
INSERT psopdf(code) SELECT 'Product1' + SPACE(9) + '10.00 '
+ SPACE(10) + '52.30' + SPACE(10) + '5230.0'
INSERT psopdf(code) SELECT 'Product2' + SPACE(9) + '2.00 '
+ SPACE(10) + '10.00' + SPACE(10) + ' 20.0'
INSERT psopdf(code) SELECT REPLACE(SPACE(56), ' ', '_')
INSERT psopdf(code) SELECT SPACE(50) + '5250.0'

After INSERT call the stored procedure with file name demo2.

EXEC sql2pdf 'demo2'

The result is in your C:\ directory.

http://www.sqlservercentral.com/columnists/mivica/pdfdemo2.jpg

EXAMPLE 2:

The second example uses the pubs database.

USE pubs
INSERT psopdf(code)
SELECT t1.au_lname + ' ' + t1.au_fname + ' ' + t1.phone + ' ' + t1.address
  + ' ' + t1.city + ' ' + t1.state + ' ' + t1.zip
FROM authors t1, authors t2

After INSERT call the stored procedure with file name demo1.

EXEC sql2pdf 'demo1'

The result is in your C:\ directory.

http://www.sqlservercentral.com/columnists/mivica/pdfdemo1.jpg

 

The Myth of Bandwidth and Application Performance

Deduping Data in SQL Server 2005

MS Excel and MS Word shortcuts

XML Injection

One more type of injection we need to be aware of, since we are using XML for data transfer in most of our applications:

 

XML Injection:

 

More information is available here:

http://www.owasp.org/index.php/Testing_for_XML_Injection

How to add sitemap.xml for BLOGSPOT.COM

Please follow the steps below to add a sitemap for BLOGSPOT.COM. At this point I am assuming that you have already verified your blog website by adding the Google-provided meta tag to your blog's home page.

 

1. Go to the Google Webmaster Tools URL: www.google.com/webmasters/tools

2. Enter your Gmail username and password.

3. Click on the website link for which you want to add the sitemap file.

4. Click on the Sitemap link.

5. Click on Add a sitemap.

6. Select the "Add General Web Sitemap" option from the drop-down menu.

7. Enter the file name as "ATOM.XML".

8. Click on the "Add General Sitemap" button.

9. That's it, you are done. Google will verify your atom.xml file within a few hours.

Gmail Rolls Out 13 New Features

Gmail rocks and all of you knew it already, but even so, users always want to see more features from Google's mail service. Whether we're talking about folders or a different UI, people have always requested new functions. Because of that, Google has recently rolled out Gmail Labs, a special testing platform that incorporates test versions of upcoming Gmail features and can be used by all users of the mail service.

"People often ask how we decide what to build next. It's usually a mix of factors, like how many users are asking for it (think delete button, vacation responder, and IMAP, among others), how useful we think it will be (think chat, conversation view, etc.) or how much fun it will be to work on (this is actually really important). We have all sorts of debates about each option, we weigh the pros and cons, and then some of the time we probably make the wrong decision," Keith Coleman, Product Manager, mentioned the reasons for implementing the new platform.

In other words, Gmail Labs is a testing platform designed by Google based on your feedback. If you and lots of other users request the same feature, Google designs it and integrates it into Gmail Labs in order to see what other people think about it. If positive feedback is received, Google may promote the functionality from Gmail Labs into the final version.

"The idea behind Labs is that any engineer can go to lunch, come up with a cool idea, code it up, and ship it as a Labs feature," the Google official explains. "Labs is now out to all English users (US and UK), and administrators using Google Apps can choose to enable Labs by checking the `Turn on new features' box in Domain Settings."

At this point, no less than 13 functions are available under the Gmail Labs link included in all Gmail accounts. All you need to do is navigate to Settings and click on the last tab, called Labs. Here you can find all the available experimental features. For instance, you can now choose new chat emoticons, mouse gestures for Gmail, custom keyboard shortcuts, custom date formats and many others. Obviously, the main goal of this testing platform is to get feedback, so feel free to send your opinion about the new features to Google.

Tuesday, June 3, 2008

Excel file download on MAC

In order to download the .xls file on a Mac machine, we need to set the following Content-Disposition header on the response object.

 

Response.AddHeader "Content-Disposition", "attachment;filename=" & strgroup & ".csv;"

 

Please note: “attachment;”

 

Wednesday, May 14, 2008

JavaScript Trim function

function trim(s) {

  while (s.substring(0,1) == ' ') {

    s = s.substring(1,s.length);

  }

  while (s.substring(s.length-1,s.length) == ' ') {

    s = s.substring(0,s.length-1);

  }

  return s;

}

JavaScript Replace All function

 

// This function replaces all instances of findStr in oldStr with repStr.

function replaceAll(oldStr, findStr, repStr) {

  var srchNdx = 0;   // srchNdx keeps track of where in oldStr we are currently searching.

  var newStr = "";   // newStr holds the altered version of oldStr.

  // As long as there are occurrences left to replace, this loop will run.
  while (oldStr.indexOf(findStr, srchNdx) != -1) {

    // Copy the unaltered text from the current position up to the next findStr.
    newStr += oldStr.substring(srchNdx, oldStr.indexOf(findStr, srchNdx));

    // Append the replacement string instead of the old string.
    newStr += repStr;

    // Jump past this occurrence and continue searching from there.
    srchNdx = oldStr.indexOf(findStr, srchNdx) + findStr.length;

  }

  // Append whatever is left after the last occurrence.
  newStr += oldStr.substring(srchNdx, oldStr.length);

  return newStr;

}

Sunday, May 11, 2008

Cookie.isEnabled

Please find more about it at: http://www.quirksmode.org/js/cookies.html

Oracle Java JDBC Driver and Sample Code

Basics of Google Web Toolkit

  • Google Web Toolkit (GWT) is an open source Java development framework.
  • You can develop and debug AJAX applications in the Java language using the Java development tools of your choice, such as Eclipse or NetBeans.
  • When you deploy your application to production, the GWT compiler translates your Java application to browser-compliant JavaScript and HTML.
  • GWT is a Java-to-JavaScript compiler (see the sketch below for what a GWT entry point looks like).
Please find more detail about GWT at: http://docs.google.com/Presentation?id=dgchtzmz_4p72zjzdr
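
As a rough sketch (not taken from the presentation above), a GWT module's entry point is a Java class implementing EntryPoint; the class name MyApp is hypothetical, and the widget API shown is the classic 2008-era com.google.gwt.user.client.ui package:

import com.google.gwt.core.client.EntryPoint;
import com.google.gwt.user.client.Window;
import com.google.gwt.user.client.ui.Button;
import com.google.gwt.user.client.ui.ClickListener;
import com.google.gwt.user.client.ui.RootPanel;
import com.google.gwt.user.client.ui.Widget;

// Hypothetical entry point class; the GWT compiler turns this Java code into JavaScript.
public class MyApp implements EntryPoint {
    public void onModuleLoad() {
        Button button = new Button("Click me");
        button.addClickListener(new ClickListener() {
            public void onClick(Widget sender) {
                Window.alert("Hello from Java, compiled to JavaScript");
            }
        });
        RootPanel.get().add(button);   // attach the widget to the host HTML page
    }
}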