Sep 29, 2011

How to Implement Voice Recognition in Android

I’ve been searching for a simple tutorial on using voice recognition in Android but haven’t had much luck. The official Google documentation provides an example activity, but doesn’t explain much beyond that, so you’re somewhat on your own from there.

Luckily I’ve already gone through some of that pain, so this should be easy for you. Post a comment below if you think I can improve this.

I’d suggest that you create a blank project for this, get the basics working, then think about merging voice recognition into your existing applications. I’d also suggest that you copy the code below exactly as it appears; once you have that working you can begin to tweak it.

With your blank project set up, you should have an AndroidManifest.xml file like the following:



AndroidManifest.xml
-------------------


<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.voice.recog"
    android:versionCode="1"
    android:versionName="1.0">
    <application android:label="VoiceRecognitionDemo" android:icon="@drawable/icon"
        android:debuggable="true">
        <activity android:name=".Main"
            android:label="@string/app_name">
            <intent-filter>
                <action android:name="android.intent.action.MAIN" />
                <category android:name="android.intent.category.LAUNCHER" />
            </intent-filter>
        </activity>
    </application>
</manifest>


And have the following in res/layout/voice_recog.xml :
<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="fill_parent"
    android:layout_height="fill_parent"
    android:orientation="vertical">

    <TextView
        android:layout_width="fill_parent"
        android:layout_height="wrap_content"
        android:paddingBottom="4dip"
        android:text="Click the button and start speaking" />

    <Button android:id="@+id/speakButton"
        android:layout_width="fill_parent"
        android:layout_height="wrap_content"
        android:onClick="speakButtonClicked"
        android:text="Click Me!" />

    <ListView android:id="@+id/list"
        android:layout_width="fill_parent"
        android:layout_height="0dip"
        android:layout_weight="1" />

</LinearLayout>

And finally, this in your res/layout/main.xml :

<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:orientation="vertical"
    android:layout_width="fill_parent"
    android:layout_height="fill_parent">
    <TextView
        android:layout_width="fill_parent"
        android:layout_height="wrap_content"
        android:text="VoiceRecognition Demo!" />
</LinearLayout>

So that’s your layout and configuration sorted. It provides a button to start the voice recognition, and a list to present any words the voice recognition service thought it heard. Let’s now step through the actual activity and see how this works.

You should copy this into your activity :


package com.voice.recog;

import android.app.Activity;
import android.os.Bundle;
import android.content.Intent;
import android.content.pm.PackageManager;
import android.content.pm.ResolveInfo;
import android.speech.RecognizerIntent;
import android.view.View;
import android.widget.ArrayAdapter;
import android.widget.Button;
import android.widget.ListView;
import java.util.ArrayList;
import java.util.List;

/**
 * A very simple application to handle Voice Recognition intents
 * and display the results.
 */
public class Main extends Activity
{
    private static final int REQUEST_CODE = 1234;
    private ListView wordsList;

    /**
     * Called when the activity is first created.
     */
    @Override
    public void onCreate(Bundle savedInstanceState)
    {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.voice_recog);

        Button speakButton = (Button) findViewById(R.id.speakButton);

        wordsList = (ListView) findViewById(R.id.list);

        // Disable the button if no recognition service is present
        PackageManager pm = getPackageManager();
        List<ResolveInfo> activities = pm.queryIntentActivities(
                new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH), 0);
        if (activities.size() == 0)
        {
            speakButton.setEnabled(false);
            speakButton.setText("Recognizer not present");
        }
    }

    /**
     * Handle the action of the button being clicked.
     */
    public void speakButtonClicked(View v)
    {
        startVoiceRecognitionActivity();
    }

    /**
     * Fire an intent to start the voice recognition activity.
     */
    private void startVoiceRecognitionActivity()
    {
        Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
        intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
                RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
        intent.putExtra(RecognizerIntent.EXTRA_PROMPT, "Voice recognition Demo...");
        startActivityForResult(intent, REQUEST_CODE);
    }

    /**
     * Handle the results from the voice recognition activity.
     */
    @Override
    protected void onActivityResult(int requestCode, int resultCode, Intent data)
    {
        if (requestCode == REQUEST_CODE && resultCode == RESULT_OK)
        {
            // Populate wordsList with the String values the recognition engine thought it heard
            ArrayList<String> matches = data.getStringArrayListExtra(
                    RecognizerIntent.EXTRA_RESULTS);
            wordsList.setAdapter(new ArrayAdapter<String>(this, android.R.layout.simple_list_item_1,
                    matches));
        }
        super.onActivityResult(requestCode, resultCode, data);
    }
}

Breakdown of what the activity does :

Declares a request code; this is an identifier we use to match the response when the voice recognition engine returns a result, and its value could be anything you want. We also declare a ListView which will hold any words the recognition engine thought it heard.

The onCreate method does the usual initialisation when the activity is first created. It also queries the PackageManager to check whether any installed packages can handle intents for ACTION_RECOGNIZE_SPEECH. We do this to confirm that a package capable of performing the recognition is installed; if not, we disable the button.

The speakButtonClicked method is bound to the button via the android:onClick attribute in the layout, so it is invoked when the button is clicked.

The startVoiceRecognitionActivity method invokes an activity that can handle the voice recognition, setting the language model to free form (as opposed to web search).

The onActivityResult method is the callback from the above invocation; it first checks that the request code matches the one that was passed in, and ensures the result is OK and not an error.

Next, the results are pulled out of the intent and set into the ListView to be displayed on the screen.
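
As an optional tweak (not part of the tutorial code above), the recognizer intent supports a few more extras; for example, you can cap how many hypotheses come back. A minimal sketch, added inside startVoiceRecognitionActivity:

// Optional: ask the recognizer for at most 5 hypotheses.
intent.putExtra(RecognizerIntent.EXTRA_MAX_RESULTS, 5);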

Notes on debugging :

You won’t have a great deal of luck running this on the emulator; there may be ways of using a PC microphone to direct audio input into the emulator, but that doesn’t sound like a trivial task. Your best bet is to generate the APK and transfer it to your device (I’m running an Orange San Francisco / ZTE Blade).

If you experience any crashes (I did), connect your device via USB, enable debugging, and then run the following command from a console window:

/platform-tools/adb -d logcat

What this does is invoke the Android Debug Bridge; the -d switch tells it to run against the physical device, and logcat tells ADB to print any device logging to the console.

This means anything you do on the device will be logged to your console window; it helped me find a few null pointer issues.

That’s pretty much it; it’s quite a simple activity. Have a play with it, and if you have any comments please let me know. You may have mixed results with what the recognizer thinks you said, I’ve had some odd surprises, so let me know!

Happy coding!

Sep 28, 2011

Gesture detection in Android, part 2 of 2 (Character Detector)

Android 1.6 and onwards includes a new package, android.gesture, which is used for complex gesture recognition. This package includes APIs to store, load, draw and recognize gestures. We can define our own pre-defined patterns in our application, store these gestures in a file, and later use this file to recognize the gestures.

Gestures Builder application

There is a handy sample application, Gestures Builder, which comes with Android 1.6 and higher. This application is pre-installed on 1.6 and higher emulators. Here is a screenshot of the application:




[Image: Gestures Builder sample application]

Using this application we can create our gesture library and save it to the SD card. Once the file is created we can include it in our application's /res/raw folder.

Loading a gesture library

To load the gesture file, we use the GestureLibraries class. It has functions to load from a raw resource, an SD card file, or a private file. The GestureLibraries class has the following methods:

static GestureLibrary fromFile(String path)
static GestureLibrary fromFile(File path)
static GestureLibrary fromPrivateFile(Context context, String name)
static GestureLibrary fromRawResource(Context context, int resourceId)

All these methods return a GestureLibrary object. This class is used to read gesture entries from a file, save gesture entries to a file, recognize gestures, etc. Once GestureLibraries returns the GestureLibrary corresponding to the specified file, we read all the gesture entries using the GestureLibrary.load method.
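
For example, a minimal loading sketch (the resource name gestures is illustrative):

// Load the gesture file bundled under /res/raw and read its entries.
GestureLibrary library = GestureLibraries.fromRawResource(this, R.raw.gestures);
if (!library.load()) {
    // The library could not be read; gesture recognition won't be available.
    finish();
}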

Drawing and recognizing a gesture

To draw and recognize gestures, we use the GestureOverlayView class. This view extends FrameLayout, i.e. we can use it inside any other layout or use it as a parent layout to include other child views. It acts as an overlay view on which the user can draw gestures. This view uses three callback interfaces to report the actions performed:

interface GestureOverlayView.OnGestureListener
interface GestureOverlayView.OnGesturePerformedListener
interface GestureOverlayView.OnGesturingListener

The GestureOverlayView.OnGestureListener callback interface is used to handle gesture operations at a low level. This interface has the following methods:

void onGestureStarted(GestureOverlayView overlay, MotionEvent event)
void onGesture(GestureOverlayView overlay, MotionEvent event)
void onGestureEnded(GestureOverlayView overlay, MotionEvent event)
void onGestureCancelled(GestureOverlayView overlay, MotionEvent event)

All these methods have two parameters, a GestureOverlayView and a MotionEvent, representing the overlay view and the event that occurred.

The GestureOverlayView.OnGesturingListener callback interface is used to find out when a gesture starts and ends. The interface has the following methods:

void onGesturingStarted(GestureOverlayView overlay)
void onGesturingEnded(GestureOverlayView overlay)

The onGesturingStarted method will be called when the gesture action starts and onGesturingEnded will be called when it ends. Both methods receive the GestureOverlayView in use.

Most important is the GestureOverlayView.OnGesturePerformedListener interface, which has only one method:

void onGesturePerformed(GestureOverlayView overlay, Gesture gesture)

This method is called when the user has performed a gesture and it has been processed by the GestureOverlayView. The first parameter is the overlay view in use and the second is a Gesture object representing the gesture the user performed. The Gesture class represents a hand-drawn shape; this representation has one or more strokes, and each stroke is a series of points. The GestureLibrary class uses this class to recognize gestures.

To recognize a gesture, we use the GestureLibrary.recognize method. This method accepts a Gesture, recognizes it using internal recognizers, and returns a list of predictions. A prediction is represented by the Prediction class and contains two member variables, name and score. The name variable holds the name of the gesture and the score variable holds the score given by the gesture recognizer; the score is used to choose the best matching prediction from the list. One common method is to choose the first value whose score is greater than one. Another is to choose the value that falls within minimum and maximum threshold limits. Choosing these thresholds depends entirely on the implementation, ranging from simple limits found by trial and error to more complex methods that may involve learning from user inputs and improving the recognition based on that.
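
A minimal sketch of the first approach, assuming the library variable from the loading sketch above (the 1.0 threshold is just the rule of thumb mentioned, not a fixed API value):

ArrayList<Prediction> predictions = library.recognize(gesture);
// Accept the top prediction only if its score clears the threshold.
if (!predictions.isEmpty() && predictions.get(0).score > 1.0) {
    String matchedName = predictions.get(0).name;
    // handle the recognized gesture here
}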

Sample application

The sample application that accompanies this article includes 5 pre-defined gestures: A, B, C, D and E. When the user draws one of these patterns the application lists the names and scores of all the predictions.

Example: loading a predefined gesture library from the raw folder
-----------------------------------------------------------------

GestureLibrary gesturesLibrary = GestureLibraries.fromRawResource(this, R.raw.gestures_name);

Setting a listener on the gesture overlay view
-----------------------------------------------


// 'gesture' here is the GestureOverlayView from the layout
gesture.addOnGestureListener(this);


Checking Gesture :
==================


if (gesture.getGesture() != null) {
    String str1 = recGesture(gesture.getGesture());
    if (!str1.equals("-1")) {
        // gesture recognized successfully
    } else {
        // gesture not found
    }
}


Recognizing a gesture using the Prediction class
-------------------------------------------------

private String recGesture(Gesture gesture) {
    // mGesLib is the GestureLibrary loaded earlier
    ArrayList<Prediction> alPredictions = mGesLib.recognize(gesture);
    if (alPredictions.size() > 0) {
        Prediction pRecGes = alPredictions.get(0);
        Log.e("recGesture true", pRecGes.name + alPredictions.toString());
        // Toast.makeText(getApplicationContext(), pRecGes.name, Toast.LENGTH_SHORT).show();
        return pRecGes.name;
    } else {
        Log.e("recGesture", "did not find");
        // Toast.makeText(getApplicationContext(), "did not find" + alPredictions.size(), Toast.LENGTH_SHORT).show();
        return "-1";
    }
}


Code from : krvarma

Hope this article helps you to understand complex gesture recognition in Android.

Gesture detection in Android, part 1 of 2

Gesture detection is one of the great features of all touch-based mobile devices. Gestures are patterns drawn by the user on the screen. Simple gestures include tap, scroll, swipe, etc.; complex gestures are more elaborate patterns drawn on the screen. In Android we can detect simple gestures using the GestureDetector class and complex gestures using the GestureOverlayView class.

In part 1 of this 2-part article series, I will explain simple gesture detection using the GestureDetector class, and in the next part I will explain complex gesture detection using the GestureOverlayView class.

GestureDetector is a class used to detect simple gestures like tap, scroll, swipe or fling. It detects gestures from the supplied MotionEvent objects. We use this class inside onTouchEvent, where we call GestureDetector.onTouchEvent; the GestureDetector identifies the gestures or events that occurred and reports back to us through the GestureDetector.OnGestureListener callback interface. We create an instance of the GestureDetector class by passing a Context and a GestureDetector.OnGestureListener listener.
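
For instance, a minimal sketch of this wiring (class and variable names are illustrative):

public class GestureActivity extends Activity implements GestureDetector.OnGestureListener {
    private GestureDetector mDetector;

    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // Pass the context and the listener (this activity) to the detector.
        mDetector = new GestureDetector(this, this);
    }

    @Override
    public boolean onTouchEvent(MotionEvent event) {
        // Hand every touch event to the detector for gesture classification.
        return mDetector.onTouchEvent(event);
    }

    // ... implement the OnGestureListener methods listed below ...
}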

The GestureDetector.OnGestureListener interface has the following abstract methods:

abstract boolean onDown(MotionEvent e)
abstract void onLongPress(MotionEvent e)
abstract boolean onScroll(MotionEvent e1, MotionEvent e2, float distanceX, float distanceY)
abstract void onShowPress(MotionEvent e)
abstract boolean onSingleTapUp(MotionEvent e)
abstract boolean onFling(MotionEvent e1, MotionEvent e2, float velocityX, float velocityY)

The onDown method is called when the user first touches the screen; the MotionEvent parameter represents the corresponding touch event.

The onLongPress method is called when the user touches the screen and holds for a period of time. The MotionEvent parameter represents the corresponding touch event.

The onScroll method is called when the user touches the screen and moves to another location on it. This method has 4 parameters: the first MotionEvent corresponds to the touch event that started the scroll, the second MotionEvent corresponds to the scroll that occurred, the distanceX parameter is the distance scrolled along the X axis since the last call to onScroll, and the fourth parameter, distanceY, is the distance scrolled along the Y axis since the last call to onScroll. The third and fourth parameters are a little confusing: they are not the distance between MotionEvent 1 and MotionEvent 2.

The onShowPress method is called when the user has touched the screen but not yet moved. This event is mostly used for giving visual feedback to acknowledge the user's action.

The onSingleTapUp method is called when a tap occurs, i.e. the user taps the screen.

The onFling method is called whenever the user swipes the screen in any direction, i.e. touches the screen and immediately moves the finger. The first parameter is the MotionEvent corresponding to the touch event that started the fling, the second is the MotionEvent corresponding to the movement that triggered the fling, the third is the measured velocity along the X axis and the fourth is the measured velocity along the Y axis. The use of this gesture varies from application to application. Some applications start moving objects on screen with a velocity based on the measured X and Y velocities, gradually slowing the movement until the objects settle somewhere on screen. Another use of this method is to move from one page to another within the application.
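
A minimal sketch of the page-flip idea, treating a mostly-horizontal fling as a page change (the velocity comparison is just one reasonable heuristic, not the only one):

@Override
public boolean onFling(MotionEvent e1, MotionEvent e2, float velocityX, float velocityY) {
    // A mostly-horizontal fling: decide direction from the sign of velocityX.
    if (Math.abs(velocityX) > Math.abs(velocityY)) {
        if (velocityX > 0) {
            // fling to the right: show the previous page
        } else {
            // fling to the left: show the next page
        }
        return true;
    }
    return false;
}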

Double-tap

You may have noticed that the double-tap event is not present in the GestureDetector.OnGestureListener callback interface. For some reason this event is reported using another callback interface, GestureDetector.OnDoubleTapListener. To use this callback interface we have to register for these events using GestureDetector.setOnDoubleTapListener, passing the listener. This interface has the following methods:

abstract boolean onDoubleTap(MotionEvent e)
abstract boolean onDoubleTapEvent(MotionEvent e)
abstract boolean onSingleTapConfirmed(MotionEvent e)

The onDoubleTap method is called when a double-tap event occurs. The only parameter, a MotionEvent, corresponds to the double-tap event that occurred.

The onDoubleTapEvent method is called for all events that occur within the double-tap, i.e. the down, move and up events.

The onSingleTapConfirmed method is called when a single tap occurs and is confirmed, but this is not the same as the single-tap event in GestureDetector.OnGestureListener: it is called only when the GestureDetector detects and confirms that this tap will not lead to a double-tap.
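
A minimal registration sketch, assuming the mDetector instance from the earlier example:

mDetector.setOnDoubleTapListener(new GestureDetector.OnDoubleTapListener() {
    @Override
    public boolean onDoubleTap(MotionEvent e) {
        // A double-tap occurred.
        return true;
    }

    @Override
    public boolean onDoubleTapEvent(MotionEvent e) {
        // Down/move/up events within the double-tap.
        return false;
    }

    @Override
    public boolean onSingleTapConfirmed(MotionEvent e) {
        // A single tap that will definitely not become a double-tap.
        return true;
    }
});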

The MotionEvent class

The MotionEvent class contains all the values corresponding to a movement or touch event, such as the X and Y positions at which the event occurred, the timestamp of the event, the pointer index, etc. This class also contains multi-touch information. Another interesting member is the pressure value, which reports the pressure of touch and movement events. I am experimenting with multi-touch and pressure and will be posting an article soon.

Example application

The example application accompanying this article is a simple one to show the use of these gestures. The application has 4 views, each with a different color, and 2 modes: SCROLL mode and FLIP mode. The application starts in FLIP mode; in this mode, when you perform the swipe/fling gesture in the left, right, up or down direction, the views change back and forth. When a long-press is detected, the application changes to SCROLL mode, in which you can scroll the displayed view. While in this mode, you can double-tap the screen to bring it back to its original position. When another long-press is detected the application changes back to FLIP mode.

I hope this gives you an introduction to simple gesture detection in Android. In the next part of this article I will explain complex gesture detection using the GestureOverlayView class.

Happy gesture coding!

Code from : krvarma.com

Multi-touch in Android

The word “multitouch” gets thrown around quite a bit and it’s not always clear what people are referring to. For some it’s about hardware capability, for others it refers to specific gesture support in software. Whatever you decide to call it, today we’re going to look at how to make your apps and views behave nicely with multiple fingers on the screen.

This post is going to be heavy on code examples. It will cover creating a custom View that responds to touch events and allows the user to manipulate an object drawn within it. To get the most out of the examples you should be familiar with setting up an Activity and the basics of the Android UI system. Full project source will be linked at the end.

We’ll begin with a new View class that draws an object (our application icon) at a given position:

public class TouchExampleView extends View {
    private Drawable mIcon;
    private float mPosX;
    private float mPosY;

    private float mLastTouchX;
    private float mLastTouchY;

    public TouchExampleView(Context context) {
        this(context, null, 0);
    }

    public TouchExampleView(Context context, AttributeSet attrs) {
        this(context, attrs, 0);
    }

    public TouchExampleView(Context context, AttributeSet attrs, int defStyle) {
        super(context, attrs, defStyle);
        mIcon = context.getResources().getDrawable(R.drawable.icon);
        mIcon.setBounds(0, 0, mIcon.getIntrinsicWidth(), mIcon.getIntrinsicHeight());
    }

    @Override
    public void onDraw(Canvas canvas) {
        super.onDraw(canvas);

        canvas.save();
        canvas.translate(mPosX, mPosY);
        mIcon.draw(canvas);
        canvas.restore();
    }

    @Override
    public boolean onTouchEvent(MotionEvent ev) {
        // More to come here later...
        return true;
    }
}
MotionEvent

The Android framework’s primary point of access for touch data is the android.view.MotionEvent class. Passed to your views through the onTouchEvent and onInterceptTouchEvent methods, MotionEvent contains data about “pointers,” or active touch points on the device’s screen. Through a MotionEvent you can obtain X/Y coordinates as well as size and pressure for each pointer. MotionEvent.getAction() returns a value describing what kind of motion event occurred.

One of the more common uses of touch input is letting the user drag an object around the screen. We can accomplish this in our View class from above by implementing onTouchEvent as follows:

@Override
public boolean onTouchEvent(MotionEvent ev) {
    final int action = ev.getAction();
    switch (action) {
    case MotionEvent.ACTION_DOWN: {
        final float x = ev.getX();
        final float y = ev.getY();

        // Remember where we started
        mLastTouchX = x;
        mLastTouchY = y;
        break;
    }

    case MotionEvent.ACTION_MOVE: {
        final float x = ev.getX();
        final float y = ev.getY();

        // Calculate the distance moved
        final float dx = x - mLastTouchX;
        final float dy = y - mLastTouchY;

        // Move the object
        mPosX += dx;
        mPosY += dy;

        // Remember this touch position for the next move event
        mLastTouchX = x;
        mLastTouchY = y;

        // Invalidate to request a redraw
        invalidate();
        break;
    }
    }

    return true;
}
The code above has a bug on devices that support multiple pointers. While dragging the image around the screen, place a second finger on the touchscreen then lift the first finger. The image jumps! What’s happening? We’re calculating the distance to move the object based on the last known position of the default pointer. When the first finger is lifted, the second finger becomes the default pointer and we have a large delta between pointer positions which our code dutifully applies to the object’s location.

If all you want is info about a single pointer’s location, the methods MotionEvent.getX() and MotionEvent.getY() are all you need. MotionEvent was extended in Android 2.0 (Eclair) to report data about multiple pointers and new actions were added to describe multitouch events. MotionEvent.getPointerCount() returns the number of active pointers. getX and getY now accept an index to specify which pointer’s data to retrieve.
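
As an illustrative aside (not from the original example), enumerating every active pointer in an event looks like this:

// Walk all active pointers in a MotionEvent.
for (int i = 0; i < ev.getPointerCount(); i++) {
    final int id = ev.getPointerId(i); // ID: stable across events
    final float x = ev.getX(i);        // data is addressed by index
    final float y = ev.getY(i);
    Log.d("Touch", "pointer " + id + " at (" + x + ", " + y + ")");
}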

Index vs. ID

At a higher level, touchscreen data from a snapshot in time may not be immediately useful since touch gestures involve motion over time spanning many motion events. A pointer index does not necessarily match up across complex events, it only indicates the data’s position within the MotionEvent. However this is not work that your app has to do itself. Each pointer also has an ID mapping that stays persistent across touch events. You can retrieve this ID for each pointer using MotionEvent.getPointerId(index) and find an index for a pointer ID using MotionEvent.findPointerIndex(id).

Feeling Better?

Let’s fix the example above by taking pointer IDs into account.

private static final int INVALID_POINTER_ID = -1;

// The ‘active pointer’ is the one currently moving our object.
private int mActivePointerId = INVALID_POINTER_ID;

// Existing code ...

@Override
public boolean onTouchEvent(MotionEvent ev) {
    final int action = ev.getAction();
    switch (action & MotionEvent.ACTION_MASK) {
    case MotionEvent.ACTION_DOWN: {
        final float x = ev.getX();
        final float y = ev.getY();

        mLastTouchX = x;
        mLastTouchY = y;

        // Save the ID of this pointer
        mActivePointerId = ev.getPointerId(0);
        break;
    }

    case MotionEvent.ACTION_MOVE: {
        // Find the index of the active pointer and fetch its position
        final int pointerIndex = ev.findPointerIndex(mActivePointerId);
        final float x = ev.getX(pointerIndex);
        final float y = ev.getY(pointerIndex);

        final float dx = x - mLastTouchX;
        final float dy = y - mLastTouchY;

        mPosX += dx;
        mPosY += dy;

        mLastTouchX = x;
        mLastTouchY = y;

        invalidate();
        break;
    }

    case MotionEvent.ACTION_UP: {
        mActivePointerId = INVALID_POINTER_ID;
        break;
    }

    case MotionEvent.ACTION_CANCEL: {
        mActivePointerId = INVALID_POINTER_ID;
        break;
    }

    case MotionEvent.ACTION_POINTER_UP: {
        // Extract the index of the pointer that left the touch sensor
        final int pointerIndex = (action & MotionEvent.ACTION_POINTER_INDEX_MASK)
                >> MotionEvent.ACTION_POINTER_INDEX_SHIFT;
        final int pointerId = ev.getPointerId(pointerIndex);
        if (pointerId == mActivePointerId) {
            // This was our active pointer going up. Choose a new
            // active pointer and adjust accordingly.
            final int newPointerIndex = pointerIndex == 0 ? 1 : 0;
            mLastTouchX = ev.getX(newPointerIndex);
            mLastTouchY = ev.getY(newPointerIndex);
            mActivePointerId = ev.getPointerId(newPointerIndex);
        }
        break;
    }
    }

    return true;
}
There are a few new elements at work here. We’re switching on action & MotionEvent.ACTION_MASK now rather than just action itself, and we’re using a new MotionEvent action constant, MotionEvent.ACTION_POINTER_UP. ACTION_POINTER_DOWN and ACTION_POINTER_UP are fired whenever a secondary pointer goes down or up. If there is already a pointer on the screen and a new one goes down, you will receive ACTION_POINTER_DOWN instead of ACTION_DOWN. If a pointer goes up but there is still at least one touching the screen, you will receive ACTION_POINTER_UP instead of ACTION_UP.

The ACTION_POINTER_DOWN and ACTION_POINTER_UP events encode extra information in the action value. ANDing it with MotionEvent.ACTION_MASK gives us the action constant while ANDing it with ACTION_POINTER_INDEX_MASK gives us the index of the pointer that went up or down. In the ACTION_POINTER_UP case our example extracts this index and ensures that our active pointer ID is not referring to a pointer that is no longer touching the screen. If it was, we select a different pointer to be active and save its current X and Y position. Since this saved position is used in the ACTION_MOVE case to calculate the distance to move the onscreen object, we will always calculate the distance to move using data from the correct pointer.

This is all the data that you need to process any sort of gesture your app may require. However dealing with this low-level data can be cumbersome when working with more complex gestures. Enter GestureDetectors.

GestureDetectors

Since apps can have vastly different needs, Android does not spend time cooking touch data into higher level events unless you specifically request it. GestureDetectors are small filter objects that consume MotionEvents and dispatch higher level gesture events to listeners specified during their construction. The Android framework provides two GestureDetectors out of the box, but you should also feel free to use them as examples for implementing your own if needed. GestureDetectors are a pattern, not a prepacked solution. They’re not just for complex gestures such as drawing a star while standing on your head, they can even make simple gestures like fling or double tap easier to work with.

android.view.GestureDetector generates gesture events for several common single-pointer gestures used by Android including scrolling, flinging, and long press. For Android 2.2 (Froyo) we’ve also added android.view.ScaleGestureDetector for processing the most commonly requested two-finger gesture: pinch zooming.

Gesture detectors follow the pattern of providing a method public boolean onTouchEvent(MotionEvent). This method, like its namesake in android.view.View, returns true if it handles the event and false if it does not. In the context of a gesture detector, a return value of true implies that there is an appropriate gesture currently in progress. GestureDetector and ScaleGestureDetector can be used together when you want a view to recognize multiple gestures.

To report detected gesture events, gesture detectors use listener objects passed to their constructors. ScaleGestureDetector uses ScaleGestureDetector.OnScaleGestureListener. ScaleGestureDetector.SimpleOnScaleGestureListener is offered as a helper class that you can extend if you don’t care about all of the reported events.

Since we are already supporting dragging in our example, let’s add support for scaling. The updated example code is shown below:

private ScaleGestureDetector mScaleDetector;
private float mScaleFactor = 1.f;

// Existing code ...

public TouchExampleView(Context context, AttributeSet attrs, int defStyle) {
    super(context, attrs, defStyle);
    mIcon = context.getResources().getDrawable(R.drawable.icon);
    mIcon.setBounds(0, 0, mIcon.getIntrinsicWidth(), mIcon.getIntrinsicHeight());

    // Create our ScaleGestureDetector
    mScaleDetector = new ScaleGestureDetector(context, new ScaleListener());
}

@Override
public boolean onTouchEvent(MotionEvent ev) {
    // Let the ScaleGestureDetector inspect all events.
    mScaleDetector.onTouchEvent(ev);

    final int action = ev.getAction();
    switch (action & MotionEvent.ACTION_MASK) {
    case MotionEvent.ACTION_DOWN: {
        final float x = ev.getX();
        final float y = ev.getY();

        mLastTouchX = x;
        mLastTouchY = y;
        mActivePointerId = ev.getPointerId(0);
        break;
    }

    case MotionEvent.ACTION_MOVE: {
        final int pointerIndex = ev.findPointerIndex(mActivePointerId);
        final float x = ev.getX(pointerIndex);
        final float y = ev.getY(pointerIndex);

        // Only move if the ScaleGestureDetector isn't processing a gesture.
        if (!mScaleDetector.isInProgress()) {
            final float dx = x - mLastTouchX;
            final float dy = y - mLastTouchY;

            mPosX += dx;
            mPosY += dy;

            invalidate();
        }

        mLastTouchX = x;
        mLastTouchY = y;

        break;
    }

    case MotionEvent.ACTION_UP: {
        mActivePointerId = INVALID_POINTER_ID;
        break;
    }

    case MotionEvent.ACTION_CANCEL: {
        mActivePointerId = INVALID_POINTER_ID;
        break;
    }

    case MotionEvent.ACTION_POINTER_UP: {
        final int pointerIndex = (ev.getAction() & MotionEvent.ACTION_POINTER_INDEX_MASK)
                >> MotionEvent.ACTION_POINTER_INDEX_SHIFT;
        final int pointerId = ev.getPointerId(pointerIndex);
        if (pointerId == mActivePointerId) {
            // This was our active pointer going up. Choose a new
            // active pointer and adjust accordingly.
            final int newPointerIndex = pointerIndex == 0 ? 1 : 0;
            mLastTouchX = ev.getX(newPointerIndex);
            mLastTouchY = ev.getY(newPointerIndex);
            mActivePointerId = ev.getPointerId(newPointerIndex);
        }
        break;
    }
    }

    return true;
}

@Override
public void onDraw(Canvas canvas) {
    super.onDraw(canvas);

    canvas.save();
    canvas.translate(mPosX, mPosY);
    canvas.scale(mScaleFactor, mScaleFactor);
    mIcon.draw(canvas);
    canvas.restore();
}

private class ScaleListener extends ScaleGestureDetector.SimpleOnScaleGestureListener {
    @Override
    public boolean onScale(ScaleGestureDetector detector) {
        mScaleFactor *= detector.getScaleFactor();

        // Don't let the object get too small or too large.
        mScaleFactor = Math.max(0.1f, Math.min(mScaleFactor, 5.0f));

        invalidate();
        return true;
    }
}
This example merely scratches the surface of what ScaleGestureDetector offers. The listener methods receive a reference to the detector itself as a parameter that can be queried for extended information about the gesture in progress. See the ScaleGestureDetector API documentation for more details.

Now our example app allows a user to drag with one finger, scale with two, and it correctly handles passing active pointer focus between fingers as they contact and leave the screen. You can download the final sample project at http://code.google.com/p/android-touchexample/. It requires the Android 2.2 SDK (API level 8) to build and a 2.2 (Froyo) powered device to run.

From Example to Application

In a real app you would want to tweak the details about how zooming behaves. When zooming, users will expect content to zoom about the focal point of the gesture as reported by ScaleGestureDetector.getFocusX() and getFocusY(). The specifics of this will vary depending on how your app represents and draws its content.
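
One way to do this, sketched against the mPosX/mPosY translation model this example view uses (the focal-point math, not the detector API, is the assumption here):

private class ScaleListener extends ScaleGestureDetector.SimpleOnScaleGestureListener {
    @Override
    public boolean onScale(ScaleGestureDetector detector) {
        final float factor = detector.getScaleFactor();
        final float focusX = detector.getFocusX();
        final float focusY = detector.getFocusY();

        // A screen point is mPos + content * scale; keep the content under
        // the gesture's focal point stationary by solving for the new translation.
        mPosX = focusX - (focusX - mPosX) * factor;
        mPosY = focusY - (focusY - mPosY) * factor;

        mScaleFactor *= factor;
        invalidate();
        return true;
    }
}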

Different touchscreen hardware may have different capabilities; some panels may only support a single pointer, others may support two pointers but with position data unsuitable for complex gestures, and others may support precise positioning data for two pointers and beyond. You can query what type of touchscreen a device has at runtime using PackageManager.hasSystemFeature().
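
For example, the multitouch feature flags look like this:

PackageManager pm = getPackageManager();
// Basic two-pointer support (enough for pinch zooming):
boolean basic = pm.hasSystemFeature(PackageManager.FEATURE_TOUCHSCREEN_MULTITOUCH);
// Independently tracked pointers, suitable for complex gestures:
boolean distinct = pm.hasSystemFeature(PackageManager.FEATURE_TOUCHSCREEN_MULTITOUCH_DISTINCT);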

As you design your user interface keep in mind that people use their mobile devices in many different ways and not all Android devices are created equal. Some apps might be used one-handed, making multiple-finger gestures awkward. Some users prefer using directional pads or trackballs to navigate. Well-designed gesture support can put complex functionality at your users’ fingertips, but also consider designing alternate means of accessing application functionality that can coexist with gestures.

Code from : the Android Developers Blog

Sep 12, 2011

Android: How to check network status (both Wifi and Mobile 3G)

public static boolean checkNetworkStatus(Context context) {
    ConnectivityManager connectivity = (ConnectivityManager) context.getSystemService(Context.CONNECTIVITY_SERVICE);
    TelephonyManager telephony = (TelephonyManager) context.getSystemService(Context.TELEPHONY_SERVICE);

    // NetworkStatus is a helper class from this example (not a framework API)
    // that wraps the two managers and reports overall availability.
    NetworkStatus netStatus = new NetworkStatus(connectivity, telephony);

    if (netStatus.isNetworkAvailable()) {
        Log.e("in checkNetworkStatus()", "network available");
        return true;
    } else {
        Log.e("in checkNetworkStatus()", "no network");
        return false;
    }
}


Wifi / Mobile check
-------------------

void chkStatus() {
    final ConnectivityManager connMgr = (ConnectivityManager) this.getSystemService(Context.CONNECTIVITY_SERVICE);
    final android.net.NetworkInfo wifi = connMgr.getNetworkInfo(ConnectivityManager.TYPE_WIFI);
    final android.net.NetworkInfo mobile = connMgr.getNetworkInfo(ConnectivityManager.TYPE_MOBILE);

    if (wifi.isAvailable()) {
        Toast.makeText(this, "Wifi", Toast.LENGTH_LONG).show();
    } else if (mobile.isAvailable()) {
        Toast.makeText(this, "Mobile 3G", Toast.LENGTH_LONG).show();
    } else {
        Toast.makeText(this, "No Network", Toast.LENGTH_LONG).show();
    }
}
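
Note: both of these snippets require the ACCESS_NETWORK_STATE permission (android.permission.ACCESS_NETWORK_STATE) to be declared in AndroidManifest.xml; without it the ConnectivityManager calls will throw a SecurityException.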