Tuesday, March 16, 2010

About Me

Wild Reindeer is a web log (blog) that I created in March 2010. It contains topics discussing new and old technologies around us. I created this blog as part of an assignment for CSC 2211, conducted by Mr Eric Sow. The blog template was modified from http://www.ipietoon.com/ by me.

I found that creating a blog is not that hard, but modifying and maintaining one consumes a lot of time and effort. The most difficult part is figuring out how a designer can create a blog that impresses its users. I still have a long way to go.

By creating this blog, I learnt some techniques for arranging the template of a blog or a website through an understanding of HTML and CSS. This assignment also helped me to further understand how to make use of HTML code on many platforms, such as Blogger, building a website, and posting on forums (discussion boards).

This blog is owned by Clover Yew Lih Hwan.

Contacts:
Clover
017-6070588
INTI University College Nilai

Sunday, March 14, 2010

Cloud Security

Cloud Security Challenges



John W. Rittinghouse and James F. Ransome

Although virtualization and cloud computing can help companies accomplish more by breaking the physical bonds between an IT infrastructure and its users, heightened security threats must be overcome in order to benefit fully from this new computing paradigm. This is particularly true for the SaaS provider. Some security concerns are worth more discussion. For example, in the cloud, you lose control over assets in some respects, so your security model must be reassessed. Enterprise security is only as good as the least reliable partner, department, or vendor. Can you trust your data to your service provider? This excerpt discusses some issues you should consider before answering that question.

With the cloud model, you lose control over physical security. In a public cloud, you are sharing computing resources with other companies. In a shared pool outside the enterprise, you don't have any knowledge or control of where the resources run. Exposing your data in an environment shared with other companies could give the government "reasonable cause" to seize your assets because another company has violated the law. Simply sharing the environment in the cloud may put your data at risk of seizure. Storage services provided by one cloud vendor may be incompatible with another vendor's services should you decide to move from one to the other. Vendors are known for creating what the hosting world calls "sticky services": services that an end user may have difficulty transporting from one cloud vendor to another (e.g., Amazon's "Simple Storage Service" [S3] is incompatible with IBM's Blue Cloud, or Google, or Dell).

If information is encrypted while passing through the cloud, who controls the encryption/decryption keys? Is it the customer or the cloud vendor? Most customers probably want their data encrypted both ways across the Internet using SSL (Secure Sockets Layer protocol). They also most likely want their data encrypted while it is at rest in the cloud vendor's storage pool. Be sure that you, the customer, control the encryption/decryption keys, just as if the data were still resident on your own servers.
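
As an illustration of keeping key control on the customer side, here is a minimal sketch of client-side encryption, assuming the Python cryptography package is available; the upload call is a hypothetical placeholder, not a real vendor API.

from cryptography.fernet import Fernet

# The customer generates and keeps the key; the vendor only ever sees ciphertext.
key = Fernet.generate_key()          # store this in your own key-management system
cipher = Fernet(key)

record = b"customer account data"
ciphertext = cipher.encrypt(record)  # encrypt before the data leaves your servers
# upload_to_cloud(ciphertext)        # hypothetical vendor call; ciphertext only

# On retrieval, only the key holder can recover the plaintext.
assert cipher.decrypt(ciphertext) == record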

Data integrity means ensuring that data is identically maintained during any operation (such as transfer, storage, or retrieval). Put simply, data integrity is assurance that the data is consistent and correct. Ensuring the integrity of the data really means that it changes only in response to authorized transactions. This sounds good, but you must remember that a common standard to ensure data integrity does not yet exist.
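
Until such a standard exists, a customer can at least detect unauthorized changes by keeping a cryptographic hash of each record and re-checking it on retrieval. Here is a minimal sketch using Python's standard hashlib module; the record contents are invented for illustration.

import hashlib

def digest(data: bytes) -> str:
    # SHA-256 fingerprint of a record; any change to the data changes the digest.
    return hashlib.sha256(data).hexdigest()

record = b"order=1842;amount=99.50"
reference = digest(record)  # kept on the customer's side at write time

retrieved = record          # stand-in for reading the record back from the cloud
if digest(retrieved) != reference:
    raise ValueError("integrity check failed: record changed outside an authorized transaction")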

Using SaaS offerings in the cloud means that there is much less need for software development. For example, using a web-based customer relationship management (CRM) offering eliminates the necessity to write code and "customize" a vendor's application. If you plan to use internally developed code in the cloud, it is even more important to have a formal secure software development life cycle (SDLC). The immature use of mashup technology (combinations of web services), which is fundamental to cloud applications, is inevitably going to cause unwitting security vulnerabilities in those applications. Your development tool of choice should have a security model embedded in it to guide developers during the development phase and restrict users only to their authorized data when the system is deployed into production.

As more and more mission-critical processes are moved to the cloud, SaaS suppliers will have to provide log data in a real-time, straightforward manner, probably for their administrators as well as their customers' personnel. Someone has to be responsible for monitoring for security and compliance, and unless the application and data are under the control of end users, they will not be able to do so. Will customers trust the cloud provider enough to push their mission-critical applications out to the cloud? Since the SaaS provider's logs are internal and not necessarily accessible externally or by clients or investigators, monitoring is difficult. Since access to logs is required for Payment Card Industry Data Security Standard (PCI DSS) compliance and may be requested by auditors and regulators, security managers need to make sure to negotiate access to the provider's logs as part of any service agreement.

Cloud applications undergo constant feature additions, and users must keep up to date with application improvements to be sure they are protected. The speed at which applications change in the cloud will affect both the SDLC and security. For example, Microsoft's SDLC assumes that mission-critical software will have a three- to five-year period in which it will not change substantially, but the cloud may require a change in the application every few weeks. Even worse, a secure SDLC will not be able to provide a security cycle that keeps up with changes that occur so quickly. This means that users must constantly upgrade, because an older version may not function or protect the data.

Having proper fail-over technology is a component of securing the cloud that is often overlooked. The company can survive if a non-mission-critical application goes offline, but this may not be true for mission-critical applications. Core business practices provide competitive differentiation. Security needs to move to the data level, so that enterprises can be sure their data is protected wherever it goes. Sensitive data is the domain of the enterprise, not the cloud computing provider. One of the key challenges in cloud computing is data-level security.

Most compliance standards do not envision compliance in a world of cloud computing. There is a huge body of standards that apply for IT security and compliance, governing most business interactions that will, over time, have to be translated to the cloud. SaaS makes the process of compliance more complicated, since it may be difficult for a customer to discern where its data resides on a network controlled by its SaaS provider, or a partner of that provider, which raises all sorts of compliance issues of data privacy, segregation, and security. Many compliance regulations require that data not be intermixed with other data, such as on shared servers or databases. Some countries have strict limits on what data about their citizens can be stored and for how long, and some banking regulators require that customers' financial data remain in their home country.

Compliance with government regulations such as the Sarbanes-Oxley Act (SOX), the Gramm-Leach-Bliley Act (GLBA), and the Health Insurance Portability and Accountability Act (HIPAA), and industry standards such as the PCI DSS, will be much more challenging in the SaaS environment. There is a perception that cloud computing removes data compliance responsibility; however, it should be emphasized that the data owner is still fully responsible for compliance. Those who adopt cloud computing must remember that it is the responsibility of the data owner, not the service provider, to secure valuable data.

Government policy will need to change in response to both the opportunity and the threats that cloud computing brings. This will likely focus on the off-shoring of personal data and protection of privacy, whether it is data being controlled by a third party or off-shored to another country. There will be a corresponding drop in security as the traditional controls such as VLANs (virtual local-area networks) and firewalls prove less effective during the transition to a virtualized environment. Security managers will need to pay particular attention to systems that contain critical data such as corporate financial information or source code during the transition to server virtualization in production environments.

Outsourcing means losing significant control over data, and while this isn't a good idea from a security perspective, the business ease and financial savings will continue to increase the usage of these services. Security managers will need to work with their company's legal staff to ensure that appropriate contract terms are in place to protect corporate data and provide for acceptable service-level agreements.

Cloud-based services will result in many mobile IT users accessing business data and services without traversing the corporate network. This will increase the need for enterprises to place security controls between mobile users and cloud-based services. Placing large amounts of sensitive data in a globally accessible cloud leaves organizations open to large distributed threats: attackers no longer have to come onto the premises to steal data, because they can find it all in one "virtual" location.

Virtualization efficiencies in the cloud require virtual machines from multiple organizations to be co-located on the same physical resources. Although traditional data center security still applies in the cloud environment, physical segregation and hardware-based security cannot protect against attacks between virtual machines on the same server. Administrative access is through the Internet rather than the controlled and restricted direct or on-premises connection that is adhered to in the traditional data center model. This increases risk and exposure and will require stringent monitoring for changes in system control and access control restriction.

The dynamic and fluid nature of virtual machines will make it difficult to maintain the consistency of security and ensure that records can be audited. The ease of cloning and distribution between physical servers could result in the propagation of configuration errors and other vulnerabilities. Proving the security state of a system and identifying the location of an insecure virtual machine will be challenging. Regardless of the location of the virtual machine within the virtual environment, intrusion detection and prevention systems will need to be able to detect malicious activity at the virtual machine level. The co-location of multiple virtual machines increases the attack surface and the risk of virtual machine-to-virtual machine compromise.

Localized virtual machines and physical servers use the same operating systems as well as enterprise and web applications in a cloud server environment, increasing the threat of an attacker or malware exploiting vulnerabilities in these systems and applications remotely. Virtual machines are vulnerable as they move between the private cloud and the public cloud. A fully or partially shared cloud environment is expected to have a greater attack surface and therefore can be considered to be at greater risk than a dedicated resources environment.

Operating system and application files are on a shared physical infrastructure in a virtualized cloud environment and require system, file, and activity monitoring to provide confidence and auditable proof to enterprise customers that their resources have not been compromised or tampered with. In the cloud computing environment, the enterprise subscribes to cloud computing resources, and the responsibility for patching is the subscriber's rather than the cloud computing vendor's. The need for patch maintenance vigilance is imperative. Lack of due diligence in this regard could rapidly make the task unmanageable or impossible, leaving you with "virtual patching" as the only alternative.

Enterprises are often required to prove that their security compliance is in accord with regulations, standards, and auditing practices, regardless of the location of the systems at which the data resides. Data is fluid in cloud computing and may reside in on-premises physical servers, on-premises virtual machines, or off-premises virtual machines running on cloud computing resources, and this will require some rethinking on the part of auditors and practitioners alike.

In the rush to take advantage of the benefits of cloud computing, not least of which is significant cost savings, many corporations are likely rushing into cloud computing without a serious consideration of the security implications. To establish zones of trust in the cloud, the virtual machines must be self-defending, effectively moving the perimeter to the virtual machine itself. Enterprise perimeter security (i.e., firewalls, demilitarized zones [DMZs], network segmentation, intrusion detection and prevention systems [IDS/IPS], monitoring tools, and the associated security policies) only controls the data that resides and transits behind the perimeter. In the cloud computing world, the cloud computing provider is in charge of customer data security and privacy.

Sunday, March 7, 2010

3G-3rd Generation

3G refers to the third generation of mobile telephony (that is, cellular) technology. The third generation, as the name suggests, follows two earlier generations.


The first generation (1G) began in the early 1980s with commercial deployment of Advanced Mobile Phone Service (AMPS) cellular networks. Early AMPS networks used Frequency Division Multiple Access (FDMA) to carry analog voice over channels in the 800 MHz frequency band.



The second generation (2G) emerged in the 1990s when mobile operators deployed two competing digital voice standards. In North America, some operators adopted IS-95, which used Code Division Multiple Access (CDMA) to multiplex up to 64 calls per channel in the 800 MHz band. Across the rest of the world, many operators adopted the Global System for Mobile communication (GSM) standard, which used Time Division Multiple Access (TDMA) to multiplex up to 8 calls per channel in the 900 and 1800 MHz bands.



The International Telecommunication Union (ITU) defined the third generation (3G) of mobile telephony standards – IMT-2000 – to facilitate growth, increase bandwidth, and support more diverse applications. For example, GSM could deliver not only voice, but also circuit-switched data at speeds up to 14.4 Kbps. But to support mobile multimedia applications, 3G had to deliver packet-switched data with better spectral efficiency, at far greater speeds.
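
To make that speed gap concrete, here is a rough back-of-the-envelope calculation in Python; the 3 MB file size is an arbitrary example, and the nominal peak rates ignore protocol overhead.

# Approximate transfer time for a 3 MB file at nominal peak rates.
FILE_BITS = 3 * 8 * 1_000_000  # 3 megabytes expressed in bits

rates_kbps = {
    "GSM circuit-switched (14.4 Kbps)": 14.4,
    "GPRS (114 Kbps)": 114,
    "EDGE (384 Kbps)": 384,
    "WCDMA (1.92 Mbps)": 1920,
    "HSDPA (14 Mbps)": 14000,
}

for name, kbps in rates_kbps.items():
    print(f"{name}: ~{FILE_BITS / (kbps * 1000):.0f} s")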



However, to get from 2G to 3G, mobile operators had to make "evolutionary" upgrades to existing networks while simultaneously planning their "revolutionary" new mobile broadband networks. This led to the establishment of two distinct 3G families: 3GPP and 3GPP2.



The 3rd Generation Partnership Project (3GPP) was formed in 1998 to foster deployment of 3G networks that descended from GSM. 3GPP technologies evolved as follows.



• General Packet Radio Service (GPRS) offered speeds up to 114 Kbps.
• Enhanced Data Rates for Global Evolution (EDGE) reached up to 384 Kbps.
• UMTS Wideband CDMA (WCDMA) offered downlink speeds up to 1.92 Mbps.
• High Speed Downlink Packet Access (HSDPA) boosted the downlink to 14 Mbps.
• LTE Evolved UMTS Terrestrial Radio Access (E-UTRA) is aiming for 100 Mbps.



GPRS deployments began in 2000, followed by EDGE in 2003. While these technologies are defined by IMT-2000, they are sometimes called "2.5G" because they did not offer multi-megabit data rates. EDGE has now been superseded by HSDPA (and its uplink partner HSUPA). According to the 3GPP, there were 166 HSDPA networks in 75 countries at the end of 2007. The next step for GSM operators: LTE E-UTRA, based on specifications completed in late 2008.



A second organization – the 3rd Generation Partnership Project 2 (3GPP2) – was formed to help North American and Asian operators using CDMA transition to 3G. 3GPP2 technologies evolved as follows.



• One Times Radio Transmission Technology (1xRTT) offered speeds up to 144 Kbps.
• Evolution – Data Optimized (EV-DO) increased downlink speeds up to 2.4 Mbps.
• EV-DO Rev. A boosted downlink peak speed to 3.1 Mbps and reduced latency.
• EV-DO Rev. B can use 2 to 15 channels, with each downlink peaking at 4.9 Mbps.
• Ultra Mobile Broadband (UMB) was slated to reach 288 Mbps on the downlink.



1xRTT became available in 2002, followed by commercial EV-DO Rev. 0 in 2004. Here again, 1xRTT is referred to as "2.5G" because it served as a transitional step to EV-DO. EV-DO standards were extended twice – Revision A services emerged in 2006 and are now being succeeded by products that use Revision B to increase data rates by transmitting over multiple channels. The 3GPP2's next-generation technology, UMB, may not catch on, as many CDMA operators are now planning to evolve to LTE instead.



In fact, LTE and UMB are often called 4G (fourth-generation) technologies because they increase downlink speeds by an order of magnitude. This label is a bit premature, because what constitutes "4G" has not yet been standardized. The ITU is currently considering candidate technologies for inclusion in the 4G IMT-Advanced standard, including LTE, UMB, and WiMAX II. Goals for 4G include data rates of at least 100 Mbps, use of OFDMA transmission, and packet-switched delivery of IP-based voice, data, and streaming multimedia.

Saturday, March 6, 2010

Motorola Milestone


General
- 2G Network: GSM 850 / 900 / 1800 / 1900
- 3G Network: HSDPA 900 / 2100
- Announced: 2009, November
- Status: Available. Released 2009, November

Size
- Dimensions: 115.8 x 60 x 13.7 mm
- Weight: 165 g

Display
- Type: TFT capacitive touchscreen, 16M colors
- Size: 480 x 854 pixels, 3.7 inches
- Multi-touch input method
- Accelerometer sensor
- Proximity sensor for auto turn-off
- Full QWERTY keyboard with 5-way navigation key

Sound
- Alert types: Vibration; MP3, WAV ringtones
- Speakerphone: Yes, with stereo speakers
- 3.5 mm audio jack

Memory
- Phonebook: Practically unlimited entries and fields, Photo call
- Call records: Practically unlimited
- Internal: 133 MB storage, 256 MB RAM
- Card slot: microSD, up to 32 GB (8 GB included)

Data
- GPRS: Class 12 (4+1/3+2/2+3/1+4 slots), 32 - 48 kbps
- EDGE: Class 12
- 3G: HSDPA, 10.2 Mbps; HSUPA, 5.76 Mbps
- WLAN: Wi-Fi 802.11 b/g
- Bluetooth: Yes, v2.1 with A2DP
- Infrared port: No
- USB: Yes, microUSB v2.0

Camera
- Primary: 5 MP, 2592 x 1944 pixels, autofocus, dual-LED flash
- Features: Geo-tagging
- Video: Yes, D1 (720 x 480 pixels) @ 24 fps
- Secondary: No

Features
- OS: Android OS, v2.0 (Eclair)
- CPU: ARM Cortex A8 600 MHz processor
- Messaging: SMS (threaded view), MMS, Email, IM, Push Email
- Browser: HTML
- Radio: No
- Games: Downloadable
- Colors: Black
- GPS: Yes, with A-GPS support, Motonav software
- Java
- Digital compass
- MP3/eAAC+/WAV/WMA9 player
- MP4/H.263/H.264/WMV9 player
- Google Search, Maps, Gmail, YouTube, Google Talk
- Adobe Flash Player v10.1
- Document viewer
- Photo viewer/editor
- Organizer
- Voice memo/dial
- T9

Battery
- Standard battery, Li-Ion 1400 mAh (BP6X)
- Stand-by: Up to 350 h
- Talk time: Up to 6 h 30 min

Misc
- SAR US: 1.49 W/kg (head), 1.50 W/kg (body)

For a phone that seemed to cause such a stir in the US when it launched last year, the Motorola Milestone (called the Droid in the US) has barely raised a ripple this side of the pond. No network has signed up for the device – in fact, only Orange lists Motorola handsets at all in the UK – and while enthusiasts snapped up the first batch from online retailer Expansys before Christmas, it has all gone very quiet since then.

It's easy to see why Motorola might now be feeling a little bit sheepish about its much vaunted iPhone killer. There is a new kid on the block: Google's Nexus One, which sports an updated version of the Android operating system that the Milestone contains, a better screen and a sexier look.

It's also easy to see why Google has got fed up with mobile phone manufacturers putting its increasingly elegant Android software into a bunch of ugly bricks and decided that it needed to be in complete control of its own handset in order to stop the iPhone stealing the smartphone show. From the uninspiring T-Mobile Pulse and the chunky Motorola Dext to the HTC Hero, with its weird "chin", and the temperamental Samsung Galaxy i7500, Android devices have hardly been trend setters.

The Motorola Milestone continues disappointingly in that vein. It is a similar size to the iPhone, though slightly heavier, and when placed on its side so that the qwerty keyboard slides out – in an admittedly reassuringly solid manner, because the build quality is excellent – it juts out past the screen on the right-hand side. This makes using the keyboard rather awkward, as it is off-centre. The screen on the Milestone is inferior to the active-matrix organic LED (AMOLED) touchscreen on the Nexus One, which certainly dazzled our reviewer Bobbie Johnson.

But the Milestone does include multitouch, unlike the Nexus One, Dext and its US variant the Droid. Like all Android devices, however, the Milestone is still waiting for developers to start creating the sort of applications – not least games – that really bring multitouch to life. For an example of what multitouch can become, look no further than the game Eliss being played on an iPhone.

The Milestone is far more responsive than the Motorola Dext – which in my experience suffers from dreadful lag – in part because Motorola's first stab at an Android handset was running version 1.5 of the software as opposed to the Milestone's Android 2.0. The Nexus One, meanwhile, is on Android 2.1. But the Milestone actually represents something of a step backwards for Motorola.

The Dext – sold as the Cliq in the US – included Motoblur, which brought social networking updates direct to the device's homescreen rather like Vodafone's 360 service. But Motoblur is conspicuously absent from the new device.

All the usual Android features are, however, present: email integration is easy, setting up contacts and downloading what applications there are from the Android marketplace is simple. The Milestone also has a better camera than the iPhone – weighing in at 5 megapixels and including a similar variety of bells and whistles, such as flash and a digital zoom, to those included on the Nexus One – but I found it incredibly slow to process images. The Milestone can take a 32GB MicroSD card, the same as the Nexus One. Both the Nexus One and Milestone, meanwhile, allow for multitasking, meaning you can flit between applications without having to close them down, which the iPhone has yet to achieve.

The ultimate question with the Milestone is why bother to buy it when the Nexus One is a better phone? Yes it has a keypad, but anyone who desperately needs a keyboard should just buy a BlackBerry – RIM is the only handset manufacturer that can be trusted to produce one that will not end up inducing carpal tunnel syndrome in long-term users. The Milestone's off-centre keyboard will cripple you in a matter of weeks.

The big drawback with the Nexus One is that it is currently only available direct from Google. This makes it expensive – at about £425 – as there is no network operator to subsidise it, and it leaves any customer who has problems with the device with no option other than emailing Google and waiting for a response. That, however, is going to change as Vodafone, and possibly T-Mobile, will sell the Nexus One in the UK later this year. Anyone desperate for an Android phone would do well to wait, treating this latest Motorola attempt as a Milestone on the road to something better.

Pros: It's not an iPhone – for those that cannot bear the thought of becoming "one of those people that has an iPhone".

Cons: It's not a Nexus One

Rain Sensor

The System
Rain Sensor is a highly versatile device for automatically wiping a vehicle's windscreen when it is wet due to moisture, raindrops or even mud. It works by reflecting light beams within the windscreen. When raindrops fall onto the windscreen, the reflection is disturbed, creating a drop in light-beam intensity. The system then activates the wiper in full automatic mode.
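
To illustrate the idea, here is a minimal Python sketch of that thresholding logic; the sensor reading is simulated, and all names and thresholds are invented for illustration rather than taken from the product.

import random
import time

DRY_INTENSITY = 1.0  # normalized reflected intensity with a dry windscreen
TRIGGER_DROP = 0.05  # intensity drop taken to indicate water on the glass

def read_intensity() -> float:
    # Stand-in for the photodiode sampling the reflected beam;
    # occasional raindrops are simulated at random.
    splash = random.random() * 0.2 if random.random() < 0.3 else 0.0
    return DRY_INTENSITY - splash

def set_wiper_speed(level: int) -> None:
    print(f"wiper speed -> {level}")  # stand-in for the wiper motor control

for _ in range(100):
    drop = DRY_INTENSITY - read_intensity()
    if drop > TRIGGER_DROP:
        # A larger intensity drop means more water on the glass, so wipe faster.
        set_wiper_speed(2 if drop > 3 * TRIGGER_DROP else 1)
    else:
        set_wiper_speed(0)
    time.sleep(0.01)  # ~10 ms polling interval, matching the quoted response time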

Features:
• Automatic wiper activation and deactivation
• Intelligent wiper speed control
• Auto tuning for all windscreens
• Manual override function

Benefits:
• Safety: The 10-millisecond response time allows immediate activation of the wiper when sudden splashes of water (due to puddles or potholes) totally "blind" the driver. Without the Rain Sensor, the driver would be at risk of losing control of the vehicle.
• Comfort: The driver may be subject to many distractions, such as heavy traffic, bad weather, dangerous road conditions and fatigue. The Rain Sensor reduces the driver's distraction, making driving more comfortable and relaxing. Trailing a vehicle in wet conditions is no longer a nuisance, as detection of even 0.005 milliliters of water will activate the wiper.

T-Mobile

T-Mobile G1



The T-Mobile G1 combines full touch-screen functionality and a QWERTY keyboard with a mobile Web experience that includes the popular Google products that millions have enjoyed on the desktop, including Google Maps Street View, Gmail, YouTube and others.
With a fun and intuitive user interface and one-touch access to Google Search, the T-Mobile G1 is also the first phone to provide access to Android Market, where customers can find and download unique applications to expand and personalize their phone to fit their lifestyle.

Delivering the Familiarity of Google for a Superior Mobile Internet Experience:
The T-Mobile G1 with Google delivers a premium, easy-to-use mobile Web and communications experience in one device. Working together, T-Mobile, Google and HTC integrated Android and T-Mobile services into the phone’s form and function. The T-Mobile G1’s vibrant, high-quality screen slides open to reveal a full QWERTY keyboard, great for communicating with friends online or using the phone’s e-mail, IM and mobile messaging capabilities. As another option for accessing the device, the T-Mobile G1 comes equipped with a convenient trackball for more precise, one-handed navigation.
With one-click contextual search, T-Mobile G1 customers in a flash can search for relevant information with a touch of a finger. A full HTML Web browser allows users to see any Web page the way it was designed to be seen, and then easily zoom in to expand any section by simply tapping on the screen. With built-in support for T-Mobile’s 3G and EDGE network as well as Wi-Fi, the T-Mobile G1 can connect to the best available high-speed data connection for surfing the Web and downloading information quickly and effortlessly.

Google Maps Street View:
With Google Maps, Google's groundbreaking maps service, T-Mobile G1 users can instantly view maps and satellite imagery, as well as find local businesses and get driving directions, all from the phone's easy-to-use touch interface. The T-Mobile G1 also includes Google Maps Street View, allowing customers to explore cities at street level virtually while on the go. Without taking a step, customers can tour a far-away place as if they were there, standing on the street corner. Even better, the Google Maps feature syncs with the built-in compass on the phone (an industry first) to allow users to view locations and navigate 360 degrees by simply moving the phone with their hand. Google Maps Street View is available today in many U.S. locations and soon in European countries.

Communicating on the Go:
The T-Mobile G1 features a rich HTML e-mail client, which seamlessly syncs your e-mail, calendar and contacts from Gmail as well as most other POP3 or IMAP e-mail services. The T-Mobile G1 multitasks, so you can read a Web page while also downloading your e-mail in the background. It combines Instant Messaging support for Google Talk™, as well as AOL®, Yahoo! Messenger® and Windows Live Messenger in the U.S. With access to high-speed Web browsing and a 3-megapixel camera with photo-sharing capabilities, the T-Mobile G1 is ideal for balancing a busy lifestyle, whether sharing pictures, checking the latest sports scores or accessing social networking sites.

Embracing User-Generated Content:
Customers can use the T-Mobile G1's 3G and Wi-Fi connection to attach and share pictures over email and MMS or download music from their favorite Web sites, and soon, upload and post pictures to their personal blog. Built-in support for YouTube allows customers to enjoy YouTube's originally-created content, easily navigate through YouTube's familiar video browsing categories or search for specific videos.

Music at Your Fingertips:

The T-Mobile G1 comes pre-loaded with a new application developed by Amazon.com that gives customers easy access to Amazon MP3, Amazon.com’s digital music download store with more than 6 million DRM-free MP3 tracks. Using the new application, T-Mobile G1 customers are able to search, sample, purchase and download music from Amazon MP3 directly to their device (downloading music from Amazon MP3 using the T-Mobile G1 requires a Wi-Fi connection; searching, sampling and purchasing music can be done anywhere with a cellular connection). The T-Mobile G1 will be the first device with the Amazon MP3 mobile application pre-loaded.

Android Market:
The T-Mobile G1 is the first phone to offer access to Android Market, which hosts unique applications and mashups of existing and new services from developers around the world. With just a couple of short clicks, customers can find and download a wide range of innovative software applications — from games to social networking and on-the-go shopping — to personalize their phone and enhance their mobile lifestyle. When the phone launches next month, dozens of unique, first-of-a-kind Android applications will be available for download on Android Market, including:

•ShopSavvy:
an application designed to help people do comparative shopping. Users scan the UPC code of a product with their phone’s camera while they are shopping, and can instantly compare prices from online merchants and nearby local stores.

•Ecorio:
a new application developed to help people keep track of their daily travels and view what their carbon footprint looks like. With access to tips and tricks, Ecorio allows users to record the steps they take throughout their day to help offset their impact on the environment.

•BreadCrumbz:
a new application that enables people to create a step-by-step visual map using photos. Customers can create their own routes, share them with friends or with the world.

Friday, March 5, 2010

History of animation

1824: Peter Roget presented his paper 'The persistence of vision with regard to moving objects' to the British Royal Society.
1831: Dr. Joseph Antoine Plateau (a Belgian scientist) and Dr. Simon Rittrer constructed a machine called a phenakistoscope. This machine produced an illusion of movement by allowing a viewer to gaze at a rotating disk containing small windows; behind the windows was another disk containing a sequence of images. When the disks were rotated at the correct speed, the synchronization of the windows with the images created an animated effect.
1872: Eadweard Muybridge started his photographic gathering of animals in motion.
1887: Thomas Edison started his research work into motion pictures.
1889: Thomas Edison announced his creation of the kinetoscope which projected a 50ft length of film in approximately 13 seconds.
1889: George Eastman began the manufacture of photographic film strips using a nitro-cellulose base.
1892: Emile Reynaud, combining his earlier invention of the praxinoscope with a projector, opens the Theatre Optique in the Musee Grevin. It displays an animation of images painted on long strips of celluloid.
1895: Louis and Auguste Lumiere patented a device called a cinematograph, capable of projecting moving pictures.
1896: Thomas Armat designed the vitascope, which projected the films of Thomas Edison. This machine had a major influence on all subsequent projectors.
1906: J. Stuart Blackton made the first animated film, which he called "Humorous Phases of Funny Faces." His method was to draw comical faces on a blackboard and film them. He would stop the film, erase one face to draw another, and then film the newly drawn face. The 'stop-motion' provided a startling effect as the facial expressions changed before the viewer's eyes.
1908: In France, Emile Cohl produced a film, Fantasmagorie, which was the first to depict white figures on a black background.
1910: Emile Cohl makes En Route, the first paper cutout animation. This technique saves time because each new frame does not have to be redrawn; the paper is simply repositioned.
1911: Winsor McCay produced an animation sequence using his comic strip character "Little Nemo."
1913: J.R. Bray devised "Colonel Heeza Liar," and Sidney Smith created "Old Doc Yak."
1914: John R. Bray applies for a patent on numerous techniques for animation, one of the most revolutionary being the process of printing the backgrounds of the animation.
1914: Winsor McCay produced a cartoon called "Gertie, The Trained Dinosaur" which amazingly consisted of 10,000 drawings.
1914: Earl Hurd applies for a patent for the technique of drawing the animated portion of an animation on a clear celluloid sheet and later photographing it with its matching background. [Cel animation]
1917: The International Feature Syndicate released many titles including "Silk Hat Harry", "Bringing Up Father", and "Krazy Kat".
1919: Pat Sullivan created an American cartoon "Felix the Cat."
1923: Walt and Roy Disney found Disney Brothers Cartoon Studio.
1923: Walt Disney extended Max Fleischer's technique of combining live action with cartoon characters in the film "Alice's Wonderland".
1926: The first feature-length animated film, called "El Apostol", is created in Argentina.
1927: Warner Brothers released "The Jazz Singer" which introduced combined sound and images.
1928: Walt Disney created the first cartoon with synchronized sound, called "Steamboat Willie".
1930: The King of Jazz is produced by Universal. In it is a short animated sequence done by Walter Lantz. It is the first animation done with the two-strip Technicolor process.
1934: Ub Iwerks creates a multi-plane camera. This camera is capable of filming several separate layers of cels, giving the final frame a truly three-dimensional look.
1943: John and James Whitney produced "Five Abstract Film Exercises."
1945: Harry Smith produced animation by drawing directly onto film.
1957: John Whitney used 17 Bodine motors, 8 Selsyns, 9 different gear units and 5 ball integrators to create analog computer graphics.
1961: John Whitney used differential gear mechanisms to create film and television title sequences.
1963: Ivan Sutherland and SKETCHPAD at MIT/Lincoln Labs
1964: Ken Knowlton, working at Bell Laboratories, started developing computer techniques for producing animated movies.
1972: University of Utah, Ed Catmull develops an animation scripting language and creates an animation of a smooth shaded hand. Ref: E. Catmull, "A System for Computer Generated Movies", Proceedings of the ACM National Conference, 1972. (In the SIGGRAPH 98 Seminal Graphics collection.)
1972: University of Utah, Fred Parke creates first computer generated facial animation. Ref: F. Parke, "Computer Generated Animation of Faces", Proceedings of the ACM National Conference, 1972. (In the SIGGRAPH 98 Seminal Graphics collection.)
1974: National Research Council of Canada releases Hunger/La Faim, directed by Peter Foldes and featuring Burtnyk and Wein's interactive keyframing techniques. Ref: N. Burtnyk and M. Wein, "Interactive Skeleton Techniques for Enhancing Motion Dynamics in Key Frame Animation", Communications of the ACM, 19(10), October 1976. (In the SIGGRAPH 98 Seminal Graphics collection.)
1982: Tron, MAGI, movie with CG premise
1983: Bill Reeves at Lucasfilm publishes techniques for modeling particle systems. "Demo" is Star Trek II: The Wrath of Khan. The paper also promotes motion blur. Ref: W. Reeves, "Particle Systems -- A Technique for Modeling a Class of Fuzzy Objects", Computer Graphics, 17(3), July 1983. (In the SIGGRAPH 98 Seminal Graphics collection.)
1984: The Last Starfighter, CG is used in place of models
1984: Porter and Duff at Lucasfilm publish paper on digital compositing using an alpha channel. Ref: T. Porter and T. Duff, "Compositing Digital Images", Computer Graphics, 18(3), July 1984. (In the SIGGRAPH 98 Seminal Graphics collection.)
1985: Girard and Maciejewski at OSU publish a paper describing the use of inverse kinematics and dynamics for animation. Their techniques are used in the animation "Eurythmy." Ref: M. Girard and A. A. Maciejewski, "Computational Modeling for the Computer Animation of Legged Figures", Computer Graphics, 19(3), July 1985. (In the SIGGRAPH 98 Seminal Graphics collection.)
1985: Ken Perlin at NYU publishes a paper on noise functions for textures. He later applied this technique to add realism to character animations. Ref: K. Perlin, "An Image Synthesizer", Computer Graphics, 19(3), July 1985. (In the SIGGRAPH 98 Seminal Graphics collection.)
1987: John Lasseter at Pixar publishes a paper describing traditional animation principles. "Demos" are Andre and Wally B and Luxo Jr. Ref: J. Lasseter, "Principles of Traditional Animation Applied to 3D Computer Animation", Computer Graphics, 21(4), July 1987. (In the SIGGRAPH 98 Seminal Graphics collection.)
1987: Craig Reynolds then at Symbolics (now at Dreamworks SKG) publishes a paper on self-organizing behavior for groups. "Demos" are Stanley and Stella and Batman Returns. Ref: C. W. Reynolds, "Flocks, Herds, and Schools: A Distributed Behavioral Model", Computer Graphics, 21(4), July 1987. (In the SIGGRAPH 98 Seminal Graphics collection.)
1988: Willow uses morphing in live action film
1992: Beier and Neely, at SGI and PDI respectively, publish an algorithm in which line correspondences guide morphing between 2D images. "Demo" is the Michael Jackson video Black or White. Ref: T. Beier and S. Neely, "Feature-Based Image Metamorphosis", Computer Graphics, 26(2), July 1992. (In the SIGGRAPH 98 Seminal Graphics collection.)
1993: Chen and Williams at Apple publish a paper on view interpolation for 3D walkthroughs. Ref: S. E. Chen and L. Williams, "View Interpolation for Image Synthesis", Computer Graphics Proceedings, Annual Conference Series, 1993. (In the SIGGRAPH 98 Seminal Graphics collection.)
1993: Jurassic Park use of CG for realistic living creatures
1995: Toy Story first full-length 3D CG feature film