Archive for ‘TechNews’

April 10, 2008

Google Jumps Head First Into Web Services With Google App Engine

Google App Engine is designed for developers who want to run their entire application stack, soup to nuts, on Google resources. Amazon, by contrast, offers more of an a la carte offering with which developers can pick and choose what resources they want to use.

Google Product Manager Tom Stocky described the new service to me in an interview today. Developers simply upload their Python code to Google, launch the application, and can monitor usage and other metrics via a multi-platform desktop application.

More details from Google:

Today we’re announcing a preview release of Google App Engine, an application-hosting tool that developers can use to build scalable web apps on top of Google’s infrastructure. The goal is to make it easier for web developers to build and scale applications, instead of focusing on system administration and maintenance.

Leveraging Google App Engine, developers can:

  • Write code once and deploy. Provisioning and configuring multiple machines for web serving and data storage can be expensive and time consuming. Google App Engine makes it easier to deploy web applications by dynamically providing computing resources as they are needed. Developers write the code, and Google App Engine takes care of the rest.
  • Absorb spikes in traffic. When a web app surges in popularity, the sudden increase in traffic can be overwhelming for applications of all sizes, from startups to large companies that find themselves rearchitecting their databases and entire systems several times a year. With automatic replication and load balancing, Google App Engine makes it easier to scale from one user to one million by taking advantage of Bigtable and other components of Google’s scalable infrastructure.
  • Easily integrate with other Google services. It’s unnecessary and inefficient for developers to write components like authentication and e-mail from scratch for each new application. Developers using Google App Engine can make use of built-in components and Google’s broader library of APIs that provide plug-and-play functionality for simple but important features.
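The "write code once and deploy" idea above can be sketched as a plain Python web handler. Note this is only an illustrative sketch: the actual 2008 App Engine SDK wrapped applications in its own `webapp` framework, but the general shape — a small Python callable that Google's infrastructure serves and scales — is the same.

```python
# A minimal WSGI "hello world" of the kind a developer would upload.
# The real App Engine SDK provided its own framework on top of this idea;
# plain WSGI is used here only to show the general shape.

def application(environ, start_response):
    """Respond to every request with a plain-text greeting."""
    body = b"Hello from a hosted web app!"
    start_response("200 OK", [
        ("Content-Type", "text/plain"),
        ("Content-Length", str(len(body))),
    ])
    return [body]
```

The point of the platform is that everything below this function — provisioning, replication, load balancing — is Google's problem, not the developer's.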

Google App Engine: The Limitations

The service is launching in beta and has a number of limitations.

First, only the first 10,000 developers to sign up for the beta will be allowed to deploy applications.

The service is completely free during the beta period, but there are ceilings on usage. Applications cannot use more than 500 MB of total storage, 200 million megacycles/day CPU time, and 10 GB bandwidth (both ways) per day. We’re told this equates to about 5M pageviews/mo for the typical web app. After the beta period, those ceilings will be removed, but developers will need to pay for any overage. Google has not yet set pricing for the service.
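A quick back-of-the-envelope check shows how the quoted quotas relate to the "about 5M pageviews/mo" figure (both numbers taken from the announcement; the per-pageview estimate is our own arithmetic):

```python
# Sanity-check the beta quotas: 10 GB of bandwidth per day against
# roughly 5 million pageviews per month.
GB = 1024 ** 3
bandwidth_per_day = 10 * GB
pageviews_per_day = 5_000_000 / 30          # ~166,667 pageviews/day

bytes_per_pageview = bandwidth_per_day / pageviews_per_day
print(round(bytes_per_pageview / 1024))     # ~63 KB of transfer per pageview
```

Roughly 60 KB of transfer per pageview is indeed a plausible budget for a typical 2008-era web app.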

One current limitation is a requirement that applications be written in Python, a popular scripting language for building modern web apps (Ruby and PHP are among others widely used). Google says that Python is just the first supported language, and that the entire infrastructure is designed to be language neutral. Google’s initial focus on Python makes sense because they use Python internally as their scripting language (and they hired Python creator Guido van Rossum in 2005).


January 14, 2008

Microsoft Silverlight

Microsoft Silverlight is a next-generation, cross-browser, cross-platform web client runtime. Silverlight uses a lightweight subset of XAML for building rich media experiences on the web.

November 20, 2007

Open Source Web Design

Download free web design templates and share yours with others.

September 5, 2007

10 tech skills you should develop during the next five years

If you want a job where you can train in a particular skill set and then never have to learn anything new, IT isn’t the field for you. But if you like to be constantly learning new things and developing new skills, you’re in the right business. In the late 80s, NetWare and IPX/SPX administration were the skills to have. Today, it’s all about TCP/IP and the Internet.

Let’s take a look at some of the skills you should be thinking about developing to keep on top of things in the tech world in the next five years.

#1: Voice over IP

Many companies and consumers are already using VoIP for telephone services due to cost and convenience factors. According to an article published in June 2007, sales of pure IP PBX systems for the first quarter of 2007 increased 76% over the first quarter of the previous year.

More and more companies are expected to go to VoIP, to either supplement or replace their traditional phone lines. And because VoIP runs on the TCP/IP network, IT administrators will in many cases be expected to take responsibility for VoIP implementation and ongoing administration.

#2: Unified communications

Along with the growing popularity of VoIP, the concept of unified communications — the convergence of different communications technologies, such as e-mail, voicemail, text messaging, and fax — looks to be the wave of the future. Users will expect to have access to all their communications from a single interface, such as their Inbox, and from a variety of devices: PCs, laptops, smart phones/PDAs, traditional phones, etc.

Convergence makes networks more complex, and IT administrators will need to develop skills for managing converged networks to compete in tomorrow’s job market.

#3: Hybrid networks

The day of the all-Windows or all-UNIX network is already past, and networks are likely to grow more, rather than less hybridised in the future. As new versions of Linux, such as Ubuntu, become friendlier for end users, we’re likely to see some organisations deploying it on the desktop for certain users. However, it’s likely that other users will continue to use Windows because of application requirements and/or personal preferences, and there may very well be Macintosh users in the mix as well, especially in graphics environments.

IT pros will no longer be able to get by with expertise in only one platform; you’ll need to be able to support and troubleshoot different operating systems.

#4: Wireless technology

Wireless networking is still in its infancy in the enterprise. Companies are (often grudgingly) establishing wireless LANs for the use of employees and visitors because it’s the most convenient way for portable computers to connect to the network, but many organisations are still wary of wireless (rightly so), particularly its security implications.

But wireless isn’t going away, and the future promises faster and more secure wireless technologies. You’ll need to know about 802.11n, a new standard now in development and estimated to be released in late 2008, which will provide for a typical throughput of 74 Mbps with a theoretical maximum data rate of 248 Mbps and a longer range than current 802.11a/b/g standards (about 70 meters, or approximately 230 feet).

#5: Remote user support

The trend is toward more employees working off-site: executives taking their laptops on the road, telecommuters working from home at least a few days per week, personnel in the field connecting back to the LAN, and so forth. The IT staff will need to be able to support these remote users while maintaining the security of the internal network.

It will be important to learn skills relating to different VPN technologies (including SSL VPN) and technologies for health monitoring and quarantining of remote clients to prevent those that don’t meet minimal criteria (antivirus installed and updated, firewall enabled, etc.) from connecting to the LAN and putting the rest of the network at risk.
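The quarantine logic described above can be sketched in a few lines. Everything here is a hypothetical illustration — the criteria names and the admit/quarantine policy are assumptions, not any particular NAC product's API:

```python
# Hypothetical sketch of a client health (quarantine) check: a remote
# client must meet minimal criteria before being allowed onto the LAN.
# The criteria names and policy below are illustrative assumptions.

REQUIRED = {"antivirus_installed", "antivirus_updated", "firewall_enabled"}

def admit_client(client_state):
    """Return (admitted, missing) for a dict of boolean health facts."""
    missing = sorted(c for c in REQUIRED if not client_state.get(c, False))
    return (not missing, missing)

admitted, missing = admit_client(
    {"antivirus_installed": True, "antivirus_updated": False,
     "firewall_enabled": True}
)
print(admitted, missing)   # False ['antivirus_updated'] -> quarantine
```

Real solutions (such as SSL VPN gateways with endpoint checks) apply the same idea: evaluate the client's posture, then admit, quarantine for remediation, or refuse.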

#6: Mobile user support

Cell phones, Blackberries, and other ultra-portable devices are becoming ubiquitous and will likely grow more sophisticated in the future. Employees will expect to get their corporate e-mail on their phones and in some cases (such as Windows Mobile devices), to use terminal services client software to connect these small devices to the company LAN.

IT staff members will need to develop a plethora of skills to support mobile users, including expertise in configuration of mail servers and knowledge of security implications of the devices.

#7: Software as a service

Web 2.0, the next generation of the Internet, is all about SaaS, or Software as a Service. SaaS involves delivering applications over the Web, rather than installing those applications on individual users’ machines. Some IT pundits have warned that SaaS will do away with IT administrators’ jobs entirely, but the more likely scenario is that the job description will change to one with less focus on deployment and maintenance of applications and more emphasis on broader-based planning, convergence, etc.

If SaaS takes off, the job market may also shift so that more jobs are concentrated in the application provider sector rather than in companies’ in-house IT departments. In that situation, IT pros who have the skills relating to service provision and multi-tenant architecture will have a head start when it comes to getting and staying employed.

#8: Virtualisation

Virtualisation has been around for a while, but the concept of virtual machines is set to go to a whole new level in the next few years: Microsoft is investing heavily in the technology with its Windows hypervisor (Viridian), which will run on Windows Server 2008; VMware is offering VMware Server for free; and Red Hat and SuSE plan to include Xen hypervisor technology in the next versions of their server products.

Managing a VM-based network environment is a skill that will be not just handy, but essential, as more and more companies look to virtualisation to consolidate servers and save on hardware costs.

#9: IPv6

Widespread adoption of the next generation of the Internet Protocol (IPv6) hasn’t come about as quickly as originally predicted, in large part because technologies such as NAT prevented the depletion of available IP addresses from happening as soon as anticipated.

However, with the number of hosts on the Internet growing steadily, the larger address space will eventually be critical to further expansion. IPv6 also offers better security with IPsec, a part of the basic protocol suite. Perhaps the inevitability of the transition is best indicated by the fact that Windows Vista, Windows Server 2008, Mac OS X 10.3, and the latest versions of other operating systems have IPv6 enabled by default.

With an entirely different address notation, written as colon-separated groups of hexadecimal digits instead of the familiar four decimal octets used by IPv4, there will be a learning curve for IT administrators. The time to tune up your IPv6 skills is now, before the transition becomes mandatory.
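The new notation is easy to experiment with using Python's standard `ipaddress` module: IPv6 addresses are eight 16-bit groups in hexadecimal, with runs of zero groups compressible to `::`.

```python
import ipaddress

# One address, two spellings: fully exploded, and with zero groups compressed.
addr = ipaddress.ip_address("2001:0db8:0000:0000:0000:0000:0000:0001")
print(addr.compressed)   # 2001:db8::1
print(addr.exploded)     # 2001:0db8:0000:0000:0000:0000:0000:0001
print(addr.version)      # 6

# For comparison, the familiar IPv4 dotted-decimal form:
print(ipaddress.ip_address("192.168.0.1").version)   # 4
```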

#10: Security

Smart IT pros have been developing their security skills for the last several years, but the future will bring new security challenges and new security mechanisms. Technologies such as VoIP and mobile computing bring new security issues and challenges. Authentication methods are evolving from a password-based model to multifactor models, and biometrics are likely to become more important in the future.

As threats become more sophisticated, shifting from teenage hackers defacing Web sites “just for fun” to well-financed corporate espionage agents and cyberterrorists bent on bringing down the country’s vital infrastructure by attacking the networks that run it, security skills must keep up.

In addition to proactive measures, IT pros will need to know more about computer forensics and be able to track what is happening and has happened on their networks.

February 2, 2007

Mobiles to get portable hard drives

The first portable hard drives for phones are coming onto the market, promising a “multimedia revolution”.

Seagate launched its Digital Audio Video Experience (Dave) range of mobile hard drives on 30th Jan 2007. The devices are credit-card-sized units, 1cm thick, offering 10GB to 20GB of storage and communicating with phones via Bluetooth.

The devices have a range of around 30ft and battery life of 14 days standby or 10 hours use. No pricing details have yet been released.

“Mobile telephony is undergoing a multimedia revolution, and the Dave mobile content platform will provide even more fuel for the growth of new music and video services over mobile networks,” said Patrick King, senior vice president and general manager of Seagate’s consumer electronics business unit.

“Products using Dave will enable digital content for business or entertainment to be stored, moved and connected in ways never before possible.”

But Seagate has been beaten to the punch on the technology by Agere Systems with a similar device, the BlueOnyx, that can communicate via Bluetooth or Wi-Fi and can store up to 40GB.

Models are also available with USB and SD card ports, and costs range from £50 to £130.

“We wanted to create a highly mobile device that solves a lot of the connectivity issues consumers have while giving them all the storage they want at an affordable price,” said Ruediger Stroh, general manager of Agere’s storage division.

“The capability will finally make the PC just another consumer device instead of the centre of the digital universe.”

Source: vnunet

June 27, 2006

What is a Packet Sniffer?

A packet sniffer is a device or program that allows eavesdropping on traffic traveling between networked computers. The packet sniffer will capture data that is addressed to other machines, saving it for later analysis.

All information that travels across a network is sent in “packets.” For example, when an email is sent from one computer to another, it is first broken up into smaller segments. Each segment carries the destination address, the source address, and other information such as the number of packets and the reassembly order. Once the packets arrive at the destination, their headers and footers are stripped away and the message is reconstituted.
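The segment-and-reassemble idea can be shown with a toy sketch. Real IP packets carry far richer headers than this; the code only illustrates sequence numbers letting the receiver restore the original order:

```python
# Toy illustration of packetisation: break a message into fixed-size
# segments with sequence numbers, deliver them out of order, reassemble.

def to_packets(message, size=4):
    total = -(-len(message) // size)   # ceiling division
    return [{"seq": i, "total": total,
             "payload": message[i * size:(i + 1) * size]}
            for i in range(total)]

def reassemble(packets):
    ordered = sorted(packets, key=lambda p: p["seq"])
    return "".join(p["payload"] for p in ordered)

packets = to_packets("Hello, Mr. Geek!")
packets.reverse()                      # simulate out-of-order arrival
print(reassemble(packets))             # Hello, Mr. Geek!
```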

In the example of the simplest network where computers share an Ethernet wire, all packets that travel between the various computers are “seen” by every computer on the network. A hub broadcasts every packet to every machine or node on the network, then a filter in each computer discards packets not addressed to it. A packet sniffer disables this filter to capture and analyze some or all packets traveling through the ethernet wire, depending on the sniffer’s configuration. This is referred to as “promiscuous mode.” Hence, if Ms. Wise on Computer A sends an email to Mr. Geek on Computer B, a packet sniffer set up on Computer D could passively capture their communication packets without either Ms. Wise or Mr. Geek knowing. This type of packet sniffer is very hard to detect because it generates no traffic of its own.

A slightly safer environment is a switched Ethernet network. Rather than a central hub that broadcasts all traffic on the network to all machines, the switch acts like a central switchboard. It receives packets directly from the originating computer, and sends them directly to the machine to which they are addressed. In this scenario, if Computer A sends an email to Computer B, and Computer D is in promiscuous mode, it still won’t see the packets. Therefore, some people mistakenly assume a packet sniffer cannot be used on a switched network.

But there are ways to hack the switch protocol. A procedure called ARP poisoning essentially fools the switch into substituting the machine running the packet sniffer for the destination machine. After capturing the data, the packets can be forwarded on to the real destination. The other technique is to flood the switch with MAC (hardware) addresses so that the switch drops into “fail-open” mode. In this mode it starts behaving like a hub, transmitting all packets to all machines to make sure traffic gets through. Both ARP poisoning and MAC flooding generate traffic signatures that can be detected by packet sniffer detection programs.
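One simple detection idea behind such programs can be sketched as follows: watch ARP replies and flag any IP address whose claimed MAC address suddenly changes, a common symptom of ARP poisoning. The observed replies below are made-up sample data:

```python
# Sketch of ARP-poisoning detection: flag an IP whose claimed MAC changes.

def detect_conflicts(arp_replies):
    """arp_replies: iterable of (ip, mac) pairs; return suspicious changes."""
    seen, suspicious = {}, []
    for ip, mac in arp_replies:
        if ip in seen and seen[ip] != mac:
            suspicious.append((ip, seen[ip], mac))
        seen[ip] = mac
    return suspicious

alerts = detect_conflicts([
    ("10.0.0.5", "aa:aa:aa:aa:aa:aa"),
    ("10.0.0.9", "bb:bb:bb:bb:bb:bb"),
    ("10.0.0.5", "cc:cc:cc:cc:cc:cc"),   # same IP, new MAC: suspect
])
print(alerts)   # [('10.0.0.5', 'aa:aa:aa:aa:aa:aa', 'cc:cc:cc:cc:cc:cc')]
```

Real tools (such as arpwatch) work on the same principle, tracking IP-to-MAC pairings over time and alerting on changes.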

A packet sniffer can also be used on the Internet to capture data traveling between computers. Internet packets often have very long distances to travel, passing through several routers that act like intermediate post offices. A packet sniffer might be installed at any point along the way. It could also be clandestinely installed on a server that acts as a gateway or collects vital personal information.

A packet sniffer is not just a hacker’s tool. It can be used for network troubleshooting and other useful purposes. However, in the wrong hands, a packet sniffer can capture sensitive personal information that can lead to invasion of privacy, identity theft, and other serious eventualities.

The best defense against a packet sniffer is a good offense: encryption. When strong encryption is used, all packets are unreadable to anyone but the intended recipient, making packet sniffers useless. They can still capture packets, but the contents will be undecipherable. This illustrates why it is so important to use secure sites to send and receive personal information, such as name, address, passwords, and certainly any credit card information or other sensitive data. A website that uses encryption starts with https. Email can be made secure by encrypting with a program like PGP (Pretty Good Privacy), which comes with seamless plug-ins for all major email programs.

June 27, 2006

What is SSL (Secure Sockets Layer)?

SSL or Secure Sockets Layer is a security protocol created by Netscape that has become an international standard on the Internet for exchanging sensitive information between a website and the computer communicating with it, referred to as the client.

SSL technology is embedded in all popular browsers and engages automatically when the user connects to a web server that is SSL-enabled. It’s easy to tell when a server is using SSL security because the address in the URL window of your browser will start with https. The “s” indicates a secure connection.

When your browser connects to an SSL server, it automatically asks the server for its digital certificate. This certificate positively authenticates the server’s identity to ensure you will not be sending sensitive data to a hacker or imposter site. The browser also makes sure the domain name matches the name on the certificate, and that the certificate was issued by a trusted certificate authority (CA) and bears a valid digital signature. If all goes well you will not even be aware this handshake has taken place.

However, if there is a glitch with the certificate, even if it is simply out of date, your browser will pop up a window to inform you of the exact problem it encountered, allowing you to end the session or continue at your own risk.
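These verification rules are still how secure connections work today. As an illustration, Python's standard `ssl` module (a modern descendant of the SSL protocol described here, now called TLS) enforces exactly these two checks by default:

```python
import ssl

# A default client-side context demands a certificate chaining to a
# trusted authority, and a hostname matching the name on the certificate.
ctx = ssl.create_default_context()

print(ctx.verify_mode == ssl.CERT_REQUIRED)   # True: cert must be trusted
print(ctx.check_hostname)                     # True: name must match
```

If either check fails at connection time, the library raises an error instead of silently continuing — the programmatic equivalent of the browser's warning window.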

Once the handshake is completed, your browser will automatically encrypt all information that you send to the site, before it leaves your computer. Encrypted information is unreadable en route. Once the information arrives at the secure server, it is decrypted using a secret key. If the server sends information back to you, that information is also encrypted at the server’s end before being sent. Your browser will decrypt it for you automatically upon arrival, then display it as it normally does.

For those running a secure server it is also possible to authenticate the client connecting to the server to ensure, for example, that the person is not pretending to be someone who has been granted restricted access. Another feature of SSL technology is the ability to authenticate data so that an interceptor cannot substitute another transmission for the actual transmission without being detected.

Though SSL makes exchanging sensitive information online secure, it cannot guarantee that the information will continue to be kept secure once it arrives safely at the server. For assurance that sensitive information is handled properly once it has been received, you must read the site’s privacy policy. It does little good to trust your personal data to SSL, if the people who ultimately have it will be sharing it with third parties, or keeping it on servers that are not bound by restricted access and other security protocols. Therefore it is always wise to read any site’s privacy policy, which includes security measures, before volunteering your personal information online.

June 27, 2006

What are Computer Cookies?

A computer cookie is a small text file containing a unique ID tag, placed on your computer by a website. The website saves a complementary file with a matching ID tag. In this file various pieces of information can be stored, from pages visited on the site to information voluntarily given to the site. When you revisit the site days or weeks later, the site can recognize you by matching the cookie on your computer with its counterpart in the website’s database.

There are two types of cookies: temporary and permanent.

Temporary cookies, also called session cookies, are stored temporarily in your browser’s memory and are deleted as soon as you end the session by closing the browser.

Permanent cookies, also called persistent cookies, are stored on your computer’s hard drive until they expire; if deleted, they will be recreated the next time you visit the sites that placed them there.
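The difference between the two kinds is visible on the wire. Using Python's standard `http.cookies` module: a cookie set with no expiry attribute is a session (temporary) cookie, while adding a max-age makes it persistent.

```python
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["session_id"] = "abc123"                        # temporary: dies with the browser
cookie["visitor_id"] = "u-42"
cookie["visitor_id"]["max-age"] = 60 * 60 * 24 * 365   # persistent: kept for one year

print(cookie["session_id"].OutputString())
# session_id=abc123
print(cookie["visitor_id"].OutputString())
# visitor_id=u-42; Max-Age=31536000
```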

Cookie technology addressed the need to keep track of information entered at a site so that if you submitted a registration form for example, the site could associate that information with you as you traveled through the site’s pages. Otherwise, every time you clicked on a different page in the site, establishing a new connection, the site would lose the information in reference to you, and would have to ask you for it again.

A temporary cookie solved this problem in the short term by setting aside a little bit of your browser’s memory to make a “folder” to save information for you. But temporary cookies were lost as soon as you closed your browser. You were not recognized on subsequent visits.

Persistent cookies solved this problem. They allowed a site to recognize you permanently by transferring a text file to your computer with a unique ID tag, matching a complementary file on the server. Now cookies could persist for years.

Both temporary and permanent cookies can be used for many helpful purposes. Automatic registration log-on, preserving website preferences, and saving items to a shopping cart are all examples of cookies put to good use.

But permanent cookies resulted in unanticipated uses as well.

Many websites began keeping track of when an individual visited, what pages were viewed, and how long the visitor stayed. This information was stored in the visitor’s cookie. When he returned, the log of previous visits to the site was immediately known, and the new visit was added to his log. If the visitor ever offered personal information at the site, his real identity, address and other personal information was associated with the anonymous ID tag. Website profiling was born.

Marketers gained an even greater advantage. A given marketer may have advertising rights on several hundred or even many thousands of the most popular websites, allowing it to pass cookies to surfers on countless sites and then recognize a surfer’s unique ID tag whenever he or she visits one of its affiliated sites. In this way the marketer can track someone across the web, from site to site, logging a comprehensive profile of the individual’s surfing habits over a period of months or even years. Sophisticated profiling programs then sort the data provided by the cookie to categorize the target in several different areas, based on statistical data. Gender, race, income level, political leanings, religious affiliation and even sexual orientation can all be determined with varying degrees of accuracy through cookie profiling. Much depends on how much a person surfs, and where they choose to go online.

As a result of public outcry in response to surreptitious profiling, cookie controls were placed in post 3.x browsers to allow users to turn cookies off — options that were not available in 1995 when permanent cookie technology was first embedded into browsers without public awareness or knowledge of how they could be used. Third-party cookies often have their own controls, as they are normally cookies placed by marketers that are used for profiling.

Cookie controls also allow user-created lists for exceptions, so that one can turn cookies off, for example, but exempt sites where cookies are put to a useful purpose.

The name “cookie” comes from “magic cookie,” an older computing term for a small, opaque piece of data passed between programs.

June 27, 2006

What is RSS (Really Simple Syndication)?

RSS or Really Simple Syndication is a useful tool for keeping updated on your favorite websites. An RSS feed is an XML file in which a website publishes its latest updates; feed readers check the file regularly and deliver any new items to subscribers.

RSS feeds are typically used with news sites or blogs, although any website can use them to disseminate information. When an update is sent out, it includes a headline and a small amount of text, either a summary or the lead-in to the larger story. You will need to click a link to read more.
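A minimal RSS 2.0 item, parsed here with Python's standard library, carries exactly the pieces described above: a headline (title), a link to the full story, and a short description. The feed content below is made-up sample data.

```python
import xml.etree.ElementTree as ET

feed = """<rss version="2.0"><channel>
  <title>Example Blog</title>
  <item>
    <title>New post!</title>
    <link>http://example.com/post/1</link>
    <description>A short summary or lead-in...</description>
  </item>
</channel></rss>"""

# Walk the channel and print each headline with its link to the full story.
channel = ET.fromstring(feed).find("channel")
for item in channel.findall("item"):
    print(item.findtext("title"), "->", item.findtext("link"))
# New post! -> http://example.com/post/1
```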

In order to receive RSS feeds, you must have an aggregator, or feed reader. There are a number of aggregators online, many of them free, so with a little bit of searching, you should be able to find an interface that appeals to you. In addition to being available on your computer, RSS feeds can also be read on PDAs and cell phones.

When you come across a website you would like to add to your aggregator, you can do so in one of two ways. Most sites that offer an RSS feed have an “RSS” or “XML” button on their homepage that you can click on and it will instantly add that feed to your aggregator. Depending on your aggregator, you may instead need to copy and paste the URL of the feed into the program.

By either method, the feed will be available as soon as you’ve added it, and your next update could arrive in seconds. If you ever decide that you don’t want to receive updates anymore, you simply delete the feed or URL from your aggregator.

Perhaps you already receive information on website updates through some sort of e-mail newsletter. RSS feeds are preferable to newsletter updates because they are instantaneous; you don’t have to wait until a designated day of the week to receive your summary. They will also never be held up by a spam filter.

RSS feeds are used daily by people who appreciate the convenience of up-to-the-minute news and the time saved by reading only the updates that interest them, rather than digging through older content again and again. They look set to become even more popular in the future.

April 22, 2006

What is managed code?

Recently I have been working on pulling together some background information just to improve my knowledge a bit further, and I thought I’d share it here.

What is managed code?

Managed code is code that has its execution managed by the .NET Framework Common Language Runtime. It refers to a contract of cooperation between natively executing code and the runtime. This contract specifies that at any point of execution, the runtime may stop an executing CPU and retrieve information specific to the current CPU instruction address. Information that must be query-able generally pertains to runtime state, such as register or stack memory contents.

The necessary information is encoded in an Intermediate Language (IL) and associated metadata, or symbolic information that describes all of the entry points and the constructs exposed in the IL (e.g., methods, properties) and their characteristics. The Common Language Infrastructure (CLI) Standard (of which the CLR is the primary commercial implementation) describes how the information is to be encoded, and programming languages that target the runtime emit the correct encoding. All a developer has to know is that any of the languages that target the runtime produce managed code emitted as PE files that contain IL and metadata. And there are many such languages to choose from, since there are nearly 20 different languages provided by third parties – everything from COBOL to Camel – in addition to C#, J#, VB .Net, JScript .Net, and C++ from Microsoft.

Before the code is run, the IL is compiled into native executable code. And, since this compilation happens by the managed execution environment (or, more correctly, by a runtime-aware compiler that knows how to target the managed execution environment), the managed execution environment can make guarantees about what the code is going to do. It can insert traps and appropriate garbage collection hooks, exception handling, type safety, array bounds and index checking, and so forth. For example, such a compiler makes sure to lay out stack frames and everything just right so that the garbage collector can run in the background on a separate thread, constantly walking the active call stack, finding all the roots, chasing down all the live objects. In addition because the IL has a notion of type safety the execution engine will maintain the guarantee of type safety eliminating a whole class of programming mistakes that often lead to security holes.
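As an analogy for these runtime guarantees, consider Python, itself a managed environment in the broad sense: out-of-bounds accesses and type errors are trapped by the runtime and surfaced as exceptions rather than silently corrupting memory, which is exactly the class of guarantee a managed execution environment provides.

```python
# The runtime traps bad accesses instead of letting them corrupt memory.

def checked_access(values, index):
    try:
        return values[index]
    except IndexError:
        return None   # the runtime's bounds check caught the bad access

print(checked_access([1, 2, 3], 1))    # 2
print(checked_access([1, 2, 3], 99))   # None

# Type safety is enforced at run time, too:
try:
    "text" + 1
except TypeError:
    print("type error trapped")
```

In unmanaged x86 code, by contrast, the equivalent mistakes read or write whatever happens to sit at the computed address.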

Contrast this to the unmanaged world: Unmanaged executable files are basically a binary image, x86 code, loaded into memory. The program counter gets put there and that’s the last the OS knows. There are protections in place around memory management and port I/O and so forth, but the system doesn’t actually know what the application is doing. Therefore, it can’t make any guarantees about what happens when the application runs.

Managed code is code executed by a .NET virtual machine, such as Microsoft’s .NET Framework Common Language Runtime, the Mono Project, or the DotGNU Project.

In a Microsoft Windows environment, all other code has come to be known as unmanaged code. In non-Windows and mixed environments, managed code is sometimes used more generally to refer to any interpreted programming language.

Managed refers to a method of exchanging information between the program and the runtime environment. It is specified that at any point of execution, the runtime may stop an executing CPU and retrieve information specific to the current CPU instruction address. Information that must be accessible generally pertains to runtime state, such as processor register or stack memory contents.

The necessary information is then encoded in Common Intermediate Language (formerly known as Microsoft Intermediate Language) and associated metadata.

Before the code is run, the Intermediate Language is compiled into native machine code. Since this compilation happens by the managed execution environment’s own runtime-aware compiler, the managed execution environment can guarantee what the code is going to do. It can insert garbage collection hooks, exception handling, type safety, array bounds and index checking, etc.

This is traditionally referred to as just-in-time compilation. However, unlike most traditional just-in-time compilers, the file that holds the pseudo machine code that the virtual machine compiles into native machine code can also contain pre-compiled binaries for different native machines (e.g. x86 and PowerPC). This is similar in concept to the Apple Universal binary format.

March 13, 2006

Office "12"

The 2007 Microsoft Office release, available by the end of 2006, is an integrated system of programs, servers, and services that will help you meet your business and personal needs. Work more efficiently, stay organized, and more easily collaborate and share information using the security-enhanced 2007 Microsoft Office system.

Register to get the latest news about the 2007 Microsoft Office release, formerly code-named Office “12”, including notification when Beta 2 is available.

By default, documents created in the next release of Microsoft Office products will be based on new, XML-based file formats. Distinct from the binary-based file format that has been a mainstay of past Microsoft Office releases, the new Office XML Formats are compact, robust file formats that enable better data integration between documents and back-end systems. An open, royalty-free file format specification maximizes interoperability in a heterogeneous environment, and enables any technology provider to integrate Microsoft Office documents into their solutions.

The new Office XML Formats introduce a number of benefits not only for developers and the solutions they build, but also for individual users and organizations of all sizes.
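As context for the format change, an Office Open XML document is a ZIP package whose entries are XML parts. A minimal sketch in Java (the part names are illustrative, not a valid .docx) shows that any ZIP reader can enumerate the parts, which is what makes the format open to third-party tools:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.util.ArrayList;
import java.util.List;
import java.util.zip.ZipEntry;
import java.util.zip.ZipInputStream;
import java.util.zip.ZipOutputStream;

public class OoxmlPackageSketch {
    // Build a toy package in memory with the kind of XML parts an OOXML
    // document contains (illustrative names only, not a real .docx)
    static byte[] buildPackage() throws Exception {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (ZipOutputStream zip = new ZipOutputStream(buf)) {
            for (String part : new String[] {"[Content_Types].xml", "word/document.xml"}) {
                zip.putNextEntry(new ZipEntry(part));
                zip.write("<?xml version=\"1.0\"?>".getBytes("UTF-8"));
                zip.closeEntry();
            }
        }
        return buf.toByteArray();
    }

    // Any ZIP reader can list the package's parts
    static List<String> listParts(byte[] pkg) throws Exception {
        List<String> names = new ArrayList<>();
        try (ZipInputStream zip = new ZipInputStream(new ByteArrayInputStream(pkg))) {
            for (ZipEntry e; (e = zip.getNextEntry()) != null; ) {
                names.add(e.getName());
            }
        }
        return names;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(listParts(buildPackage()));
    }
}
```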

February 28, 2006



The assembly is the building block of a VB.NET application. Basically, an assembly is a collection of the types and resources that are built together to provide functionality to an application.

· Assemblies look like dynamic link libraries (.dll) or executable programs (.exe).
· They differ from ordinary .exe and .dll files in that they contain the information found in a type library plus everything else needed to use the application or component.
· An assembly includes a mix of Microsoft Intermediate Language (IL) and machine code.
· The CLR is invoked by machine code found in the first several bytes of an assembly file.
· An assembly contains one or more files.
· An application can be composed of one or more assemblies.
· An assembly can be a single portable executable (PE) file (such as an .exe or .dll) or multiple PE files plus external resource files such as bitmaps.

Assemblies store metadata (data about the application), which includes:

§ Information for each public class or type used in the assembly – including class or type names, the classes from which each is derived, etc.
§ Information on all public methods in each class – including the method name and any return values.
§ Information on every public parameter for each method – including the parameter name and type.
§ Information on public enumerations, including names and values.
§ Information on the assembly version (each assembly has a specific version number).
§ The intermediate language code to execute.
§ Required resources such as pictures, along with the assembly metadata itself – also called the assembly manifest (the assembly title, description, version information, etc.).
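A rough Java analog of this self-describing metadata (the JVM class file plays the role of the assembly here; this is not .NET code): the compiled file records method names, return types, and parameter counts, which reflection can read without executing any of the code.

```java
import java.lang.reflect.Method;

public class MetadataDemo {
    public static int add(int a, int b) { return a + b; }

    public static void main(String[] args) {
        // Enumerate the public method metadata stored in the compiled file
        for (Method m : MetadataDemo.class.getDeclaredMethods()) {
            System.out.println(m.getReturnType().getSimpleName() + " "
                    + m.getName() + ", " + m.getParameterCount() + " parameter(s)");
        }
    }
}
```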

Multiple versions of an assembly can run simultaneously on the same client computer, which aids compatibility with previous versions.

Assemblies shared by multiple applications on a client computer can be installed into the global assembly cache of Windows – this enhances security because only users with Administrator privileges on the machine can delete from the global assembly cache.

Strong-Named Assemblies

The strong-named assembly concept is used to guarantee the uniqueness of an assembly name – unique names are generated with the use of public and private key pairs when an assembly is compiled.

Applications can generally only run with the assembly version with which they are originally compiled. In order to update a component (such as a DLL for a control you’ve created), a publisher policy file is used to redirect an assembly binding request to a new version.

The .NET Framework checks the integrity of strong-named assemblies to ensure they have not been modified since they were built, rejecting tampered assemblies before they are loaded.

The .NET Framework creates a strong-named assembly by combining the assembly identity (name, version, and culture information) with a public key and digital signature.

As the programmer, you must generate the strong-name key file (.snk filename extension) containing the public-private key pair, either with the Strong Name utility (Sn.exe) or through the Visual Basic .NET IDE (the latter is the usual approach – it simply involves checking the appropriate option while building an assembly). In fact, a project’s property page has a Strong Name section that automatically generates a strong-name key file and adds it to the project. The public key is inserted into the assembly at compile time; the private key is used to sign the assembly.
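The signing scheme strong names rely on can be sketched with Java’s security API (standing in for what Sn.exe and the compiler do; the RSA/SHA-256 algorithm choice here is illustrative, not what the CLR uses): the private key signs the assembly’s contents, and anyone holding only the public key can verify its integrity.

```java
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.Signature;

public class StrongNameSketch {
    static boolean signAndVerify(byte[] assemblyBytes) throws Exception {
        KeyPair keys = KeyPairGenerator.getInstance("RSA").generateKeyPair();

        // Signing: done at compile time with the private key
        Signature signer = Signature.getInstance("SHA256withRSA");
        signer.initSign(keys.getPrivate());
        signer.update(assemblyBytes);
        byte[] sig = signer.sign();

        // Verification: done at load time with only the public key
        Signature verifier = Signature.getInstance("SHA256withRSA");
        verifier.initVerify(keys.getPublic());
        verifier.update(assemblyBytes);
        return verifier.verify(sig);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(signAndVerify("pretend IL + metadata".getBytes("UTF-8")));
    }
}
```

If even one byte of the signed contents changes after signing, verification fails, which is exactly the tamper check the framework performs before loading.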

Versioning Strong-Named Assemblies

Below is a fragment of an example publisher policy file, written in XML. Such a file is compiled for shipment with a new component version using the Assembly Generation tool (Al.exe), which signs the assembly with the same strong name used for the first version, confirming that the new component comes from a valid source.

<assemblyIdentity name="myassembly" ... />
<codeBase version="..." ... />

The publicKeyToken attribute is a hexadecimal value that identifies the strong name of the assembly.
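For reference, a complete publisher policy configuration is well-formed XML along these lines – the version numbers and publicKeyToken value below are placeholders, not taken from the original fragment:

```xml
<configuration>
  <runtime>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <dependentAssembly>
        <assemblyIdentity name="myassembly"
                          publicKeyToken="0123456789abcdef"
                          culture="neutral" />
        <bindingRedirect oldVersion="1.0.0.0"
                         newVersion="2.0.0.0" />
      </dependentAssembly>
    </assemblyBinding>
  </runtime>
</configuration>
```

The bindingRedirect element is what sends requests for the old assembly version to the new one.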