Help!
Android CS9033
Software Design Document
CS 9033: Mobile Application Development
Mamoon Ismail Khalid
Trishla Shah
Chai-Wei Chang
DATE VERSION DESCRIPTION
11/15/2015 1 Initial Design Document
12/20/2015 2 Final Design Document
Table of Contents
1. Problem Statement and Introduction
2. Current Solutions and Related Work
3. Use Cases
4. Architecture Diagram
5. MVC Framework
6. Other Components
7. Database Design and Schema
8. Attempt to Run Voice Recognition as a Background Service
9. User Interface Screens
10. Features Built Already and Future Developments
11. Project Member Breakdown
12. Project Milestones
13. Bibliography
1. Problem Statement and Introduction
Most of us who have lived in big cities all our lives feel safe when we are out for a night
on the town. But traveling alone through certain areas, especially late at night, makes almost
anyone nervous. We would generally feel a lot safer if someone were making the walk with us.
Thankfully, if you own a mobile device, you are never truly alone. If you have an ongoing concern
for your personal safety, use our Help app.
Consider a medical emergency: a heart attack strikes when no one is around. You don’t have time
to think or do anything but scream once for help, and that scream activates the app. It saves lives!
Consider a robbery, a mugging, or an assault. These situations leave no time to do or say anything,
but if you involuntarily shout “Help!”, the app is triggered and you may be saved. Got into a road
accident in a lonely area, with only a few minutes before losing consciousness? Say “Help” and
you will be saved. Realize your brakes are failing? Shout for help and let your friends know you
are in danger.
Or a natural calamity: disaster strikes and there is no way to get in touch with people. You are
stuck somewhere and your phone may be out of reach. Scream “Help” and you may be saved. Someone
is following you late at night, or you see something suspicious going on? Whisper “Help”. What if
you are a realtor? This app is very useful while showing houses to strangers.
The best time to prepare for a disaster is before it happens. No one likes to think about
emergencies, but you need to be prepared. If you are caught in a dangerous situation, you won’t
have time to open your phone, make a call, and tell someone exactly where you are. This is the
problem we are trying to solve: the app keeps you and your loved ones safe by making sure someone
always knows where you are.
The app begins with a sign-up for every user of the phone. The user must enter his or her
emergency contacts, which are stored in the app. When the user shouts, or even just says, the word
“Help”, a message with his or her location is sent out to those emergency contacts. The user’s
whereabouts are thus known, and he or she can be saved.
The UI of this app is simple and easy to use, so even elders who may be technically challenged
can set it up. False alarms are also taken care of: whenever the word “Help” is heard but the
user does not want the message to go out, there is a one-minute delay during which the user can
enter a 4-digit PIN on the screen to stop the message from being sent.
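The abort rule above can be sketched in plain Java. This is a minimal illustration, not the app’s actual code: the class and method names are ours, and in the real app the countdown runs inside an Android Activity.

```java
// Hypothetical sketch of the abort window: once the phrase is detected,
// the alert goes out unless the correct 4-digit PIN arrives in time.
public class AbortWindow {
    private final String pin;          // 4-digit PIN chosen at sign-up
    private final long deadlineMillis; // detection time + one minute
    private boolean aborted = false;

    public AbortWindow(String pin, long detectedAtMillis) {
        this.pin = pin;
        this.deadlineMillis = detectedAtMillis + 60_000;
    }

    /** Returns true if the PIN is correct and arrived before the deadline. */
    public boolean tryAbort(String enteredPin, long nowMillis) {
        if (nowMillis <= deadlineMillis && pin.equals(enteredPin)) {
            aborted = true;
        }
        return aborted;
    }

    /** The SMS goes out only if the window expired without a valid PIN. */
    public boolean shouldSendAlert(long nowMillis) {
        return !aborted && nowMillis > deadlineMillis;
    }
}
```

A wrong PIN leaves the alert armed, so a bystander cannot silence the app by mashing digits.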
2. Current Solutions and Related Work
HelPls
HelPls sends an SMS with your location, as a geocode and a direct map link, to your friends,
family, or the police. To launch the app, users have to shake their phones and then press several
buttons before the SMS goes out. In a real emergency, users rarely have time for all of that, and
fumbling with the phone can put them in even more danger. Our app instead uses the voice to launch
the alert, which makes it far easier to use in an emergency, and false alarms are handled with the
4-digit PIN that stops the SMS from being sent.
Kitestring
Kitestring is a web-based service that checks in with users and sends an alert to pre-selected
contacts. It works through the user’s inaction: users visit the free service’s website at
kitestring.io to customize their SOS message and arm an SMS broadcast that will be sent out in
30 minutes, 2 hours, 5 hours, or 12 hours. Once the time is up, users get a check-in message, and
if for some reason they are unable to respond to that text within 5 minutes, Kitestring notifies
their contacts. Our app differs in that it does not rely on a web-based service: we set up our own
database and store the information locally. We also avoid check-in messages to confirm that the
user is safe, since those would quickly become annoying.
SafeTrek
SafeTrek lets users alert the police when they are in an unsafe situation, with a failsafe in
case they don’t actually need help. When users launch the app, they place a thumb on the Safe
button. If they release the thumb, they are asked to enter a 4-digit code; if they don’t enter
the code, the police are notified, and if they do, nothing happens. Our app instead shows an icon
on the toolbar notifying the user that the SMS will be sent in 5 minutes, and the alert can be
cancelled within that time, which reduces accidental SMS messages.
Red Panic Button
Red Panic Button allows users to send their GPS location to pre-set emergency contacts with a
single touch, but it has no mechanism to abort false alarms. Our app adds voice recognition, so
the user does not need to touch anything in the app at all.
Social Alert
Social Alert sends a message to your friends and relatives in case of an emergency or crime. By
integrating with social networks most of us already use, such as Facebook and Twitter, as well as
text messages, it can send a custom emergency message with your exact location, helping
authorities pinpoint exactly when and where you are. This again requires the user to press a
button, and it has no way of aborting false alarms; we use voice recognition instead.
3. Use Cases
1. A medical emergency. For example, a heart attack strikes when no one is around. You don’t have
time to think or do anything but scream once for help, and this activates the app. It saves lives!
2. A robbery, mugging, or assault. These situations leave no time to do or say anything, but if
you involuntarily happen to say “Please Help Me!”, the app is triggered and you may be saved.
3. A natural calamity. Disaster strikes and there is no way to get in touch with people. You are
stuck somewhere and your phone may be a little far away. Scream “Please Help Me” and you may be
saved.
4. Someone is following you late at night, or you see something suspicious going on. Whisper
“Please Help Me”.
5. Realtors, or any professionals who need to network and meet strangers, require a safety
measure. This app serves them well.
6. You meet with an accident and no one is around. The app sends out your location and you can
be saved.
4. Architecture Diagram
The database and the CMU dictionary interact with the entire application.
The background Service activity uses GPS.
SetProfileActivity, EditUserDetails, and MyDetails all interact with the SQLite database and the
contact book.
The MainActivity interacts with the phone’s SMS application, the GPS, the background Service
activity, and all other activities.
5. MVC FRAMEWORK
Models:
Person: This model backs the sign-up page for the person who downloads the app and wants to use
it. He/she must enter a name and phone number, which we store in the database.
Views:
Main.xml: The main screen layout, behind which the audio recognizer runs in the background.
activity_setprofile.xml: The Create Account layout, where the user is prompted to set up a
profile the first time the app is installed and to add their emergency contacts.
Mydetails.xml: Where the user can view their own information and that of their emergency
contacts.
Edit.xml: The layout where the user can edit their emergency contacts and their own information.
Controllers:
MainActivity
This is the main activity. It contains the speech-to-text converter code used to generate the
alert message sent to emergency contacts when a person screams the unique trigger phrase for the
Help application. Sound is captured through the phone’s microphone, and this code sets up the
scenario for sending the message.
SetProfileActivity
The name and phone number of the phone’s user are entered here via EditText fields. The user is
also asked to set up a password, which is used when there is a false alarm and lets the user abort
the alert. All of this is stored in the database for future access. A button opens the contact
book so contacts can be added to the app, and the DatabaseHelper stores them in the database.
There is a Set Profile button and a Home button that returns to the MainActivity. A further field
asks the user to set up the customizable message that is sent out as the alert text to their
emergency contacts.
DatabaseHelper
This Java class syncs the user’s information and the emergency contacts’ information into the
SQLite database.
MyDetails
This opens the UI for viewing and editing your details and those of your emergency contacts in
the database. The user can add or delete contacts via the phonebook as well, and can also change
the customized message sent within the alert text.
EditUserDetails
This lets you see and change the people you have added as emergency contacts, shown as a simple
list of names. You can select a contact and remove it with a Delete button; there is also a Go
Back to Main Activity button and an Add Contact button. The database is queried to populate a
ListView of all the people you have added. You can also edit your own name, phone number, and
password within this activity; the new values overwrite the old ones in the database.
BackgroundService
This component supplies the GPS coordinates of the person seeking help. It runs the GPS as a
service in the background in order to retrieve the user’s current location so it can be sent out
within the SMS.
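Once the service has a location fix, the coordinates have to be folded into the alert text. A hedged sketch of that step in plain Java follows; the helper name and the exact message format are our own illustration, not the app’s code:

```java
// Hypothetical helper that builds the alert SMS body from the user's
// custom message and the last GPS fix reported by the background service.
public class AlertMessageBuilder {
    /** With a fix: append a maps link so contacts can pinpoint the sender. */
    public static String build(String customMessage, double lat, double lng) {
        return customMessage + " My location: https://maps.google.com/?q="
                + lat + "," + lng;
    }

    /** Without a fix (GPS off): send the custom message alone. */
    public static String build(String customMessage) {
        return customMessage + " (Location unavailable: GPS is off.)";
    }
}
```

Contacts receiving the message can tap the link to open the sender’s position in a maps application; the GPS-off variant matches the two message styles shown in the UI screenshots.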
SplashScreen/ImageView in Main Activity
This displays a splash screen for 3 seconds when the app is started, while the recognizer for the
voice recognition is being set up. A Handler keeps the splash screen up for 3000 ms.
6. Other Components
AsyncTask in MainActivity: This AsyncTask launches the audio listener and recognizer in the
background to match the phrase you enter in the app.
List adapters: Simple list adapters show the emergency contacts as you add them and when you want
to view them.
CMU-Dict: This is the dictionary that the recognizer in the main activity uses to match spoken
words against your phrase. It is a collection of thousands of words with their associated
pronunciations. We used it to pick a unique SOS phrase that is not easily confused with ordinary
conversational sentences. We settled on “Please Help Me”, because anything starting with “I…”
produced a lot of false alarms.
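That rule of thumb can be written down as a small check. This is only a sketch of our reasoning; the app does not ship such a helper, and the phrase was chosen by hand:

```java
// Hypothetical check encoding the heuristic above: a good SOS phrase
// should not begin with "I", since everyday sentences very often do.
public class PhraseChooser {
    public static boolean isSafeSosPhrase(String phrase) {
        String trimmed = phrase.trim().toLowerCase();
        return !trimmed.isEmpty()
                && !trimmed.equals("i")
                && !trimmed.startsWith("i ");
    }
}
```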
CMU PocketSphinx
The library is distributed as an architecture-independent
pocketsphinx-android-5prealpha-nolib.jar plus binary .so files for the different hardware
architectures. In Android Studio, the jar file goes in the app/libs folder and the JNI .so files
go in the app/src/main/jniLibs folder.
We put the resource asset files in the project’s assets/ directory. To make them available to
PocketSphinx, the files must have a physical path; as long as they sit inside the .apk they do
not have one. The Assets class from pocketsphinx-android provides a method to copy asset files
to the external storage of the target device automatically:
edu.cmu.pocketsphinx.Assets#syncAssets synchronizes resources by reading the items from the
assets.lst file located at the top of assets/. This makes use of MD5 hashing;
PocketSphinxAndroidDemo contains an ant script (assets.xml) that generates assets.lst as well as
the .md5 files.
We made use of the CMU PocketSphinx speech recognizer. It uses GStreamer to automatically split
the incoming audio into utterances to be recognized, and offers services to start and stop
recognition. The recognizer requires a language model and a dictionary file; these can be built
automatically from a corpus of sentences using the online Sphinx Knowledge Base Tool. Example
launch files, language models, and dictionary files can be found in the demo directory of the
package. The voice_cmd example controls a mobile base using commands such as “move forward” or
“stop”; the robocup example uses standard items and names from the RoboCup@Home contest, for
instance recognizing “hello, my name is Michael” or “Bring me the Original Pringles”.
How the word recognition works:
As an example, suppose the trigram “is one of” is queried. Each node contains a next pointer to
the beginning of its successors list, and a separate array is maintained for each n-gram order.
Each node stores quantized weights, a probability and a backoff, and nodes are packed into a bit
array, i.e. the minimum number of bits required to store the next pointers and word IDs is used.
In the main activity that uses the PocketSphinx library, a SpeechRecognizer is implemented and
encapsulated within an AsyncTask. public void onResult(Hypothesis hypothesis) {} is called when
we end the recognition ourselves, and public void onEndOfSpeech() {} is called when the
recognizer detects the end of speech. Both convert the input audio to a string, which is then
matched against the KEYWORD_PHRASE we have preloaded into the application, “Please Help Me”. We
use this audio-to-string feature of the PocketSphinx recognizer to trigger our application into
doing the rest: sharing the location, sending the message, and prompting for the password to
abort the SMS. We found this approach much better than using the AT&T API first suggested by the
professor; the PocketSphinx recognizer serves our purpose in a much less complicated manner.
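Stripped of the Android plumbing, the matching step inside these callbacks reduces to comparing the hypothesis string with the preloaded phrase. A plain-Java sketch under that assumption; in the real app this logic lives in the RecognitionListener callbacks wrapped in our AsyncTask:

```java
// Hypothetical reduction of the recognizer callback logic: compare the
// hypothesis text from PocketSphinx against the preloaded trigger phrase.
public class TriggerMatcher {
    private static final String KEYWORD_PHRASE = "please help me";

    /** Called with the recognizer's hypothesis text; true fires the alert. */
    public static boolean matches(String hypothesis) {
        return hypothesis != null
                && hypothesis.trim().toLowerCase().contains(KEYWORD_PHRASE);
    }
}
```

Using contains rather than equals tolerates the recognizer picking up surrounding words in the same utterance.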
7. Database Design and Schema
The DatabaseHelper class consists of all the functions that the application uses to store data in
tables. A local SQLite database is used. It contains the following tables, each storing data
specific to the user:
1. userDetails
This table stores details relevant to the user.
CREATE TABLE userDetails
(
fname TEXT,
lname TEXT,
contact TEXT PRIMARY KEY,
password TEXT
)
2. emergencyContacts
This table is used to store all emergency contacts that the user wants. Whenever needed, a
message is sent only to these contacts with a customizable message and location of the user.
CREATE TABLE emergencyContacts
(
name TEXT,
contact TEXT,
PRIMARY KEY(name, contact)
)
3. phraseTable
This table is used to store phrases for the user. Based on the user preferences, the phrase to be
sent to emergency contacts can be added or deleted from this table.
CREATE TABLE phraseTable
(
id TEXT PRIMARY KEY,
userPhrase TEXT,
FOREIGN KEY (id) REFERENCES userDetails (fname)
)
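Inside DatabaseHelper, statements like these would typically live as string constants executed via db.execSQL() in onCreate(). A sketch of that declaration, mirroring the schema above (the constant and class names are ours, not the app’s):

```java
// Hypothetical constants mirroring the schema above; in the app these
// would be run with db.execSQL(...) inside DatabaseHelper.onCreate().
public class Schema {
    public static final String CREATE_USER_DETAILS =
            "CREATE TABLE userDetails ("
            + "fname TEXT, lname TEXT, "
            + "contact TEXT PRIMARY KEY, password TEXT)";

    public static final String CREATE_EMERGENCY_CONTACTS =
            "CREATE TABLE emergencyContacts ("
            + "name TEXT, contact TEXT, PRIMARY KEY(name, contact))";

    public static final String CREATE_PHRASE_TABLE =
            "CREATE TABLE phraseTable ("
            + "id TEXT PRIMARY KEY, userPhrase TEXT, "
            + "FOREIGN KEY (id) REFERENCES userDetails (fname))";
}
```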
8. Attempt to Run Voice Recognition as a Background Service
We tried to implement the recognizer as a background service using the following activities. We
succeeded in getting a Runnable task to execute even when the app was not open, as shown in the
screenshot below: the “testttttttttttttttt” string is logged by the background service task and
shows that the Runnable MyTask is called every few seconds even while the app is closed. This was
implemented using a Service with binding and messaging.
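The pattern of a Runnable firing every few seconds can be shown platform-neutrally. The app itself used an Android Service with a Handler plus binding and messaging; the sketch below substitutes a ScheduledExecutorService to demonstrate the same periodic-task idea in plain Java:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical stand-in for the background Runnable: a counting task that
// fires repeatedly on a schedule, as MyTask did in the service.
public class PeriodicTask {
    private final AtomicInteger runs = new AtomicInteger();
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    /** Start firing the task immediately and then every periodMillis. */
    public void start(long periodMillis) {
        scheduler.scheduleAtFixedRate(
                runs::incrementAndGet, 0, periodMillis, TimeUnit.MILLISECONDS);
    }

    public void stop() {
        scheduler.shutdown();
    }

    public int runCount() {
        return runs.get();
    }
}
```

On Android the equivalent scheduling is done by re-posting the Runnable to a Handler, since arbitrary threads cannot touch the UI.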
(Reference: Professor Jeffrey Bickford’s slides)
9. USER INTERFACE SCREENS
1. The main activity screen, with a Create A/C button to set up the user profile.
2. The UI for setting up the user’s profile.
3. The home screen after the account is created, with the button renamed MY DETAILS.
4. Viewing the details set up by the user.
5. The My Details screen for entering your name, phone number, and the message content.
6. Picking an emergency contact from the phonebook.
7. The View Emergency Contacts screen.
8. The Delete Emergency Contacts screen.
9. The password-abort pop-up shown when the phrase is detected.
10. The sent text message; both the GPS ON and GPS OFF variants are shown here.
10. Features Built Already and Future Developments
Things implemented since the last demo:
1. Made the message that the user sends out as a text to his/her emergency contacts customizable.
2. Changed the UI: added translate animations to the buttons, changed colors, and added a ripple
effect.
3. Gave a larger delay for entering the password and aborting the alert.
4. Wrote code for the background service in a Runnable task, but could not integrate it into the
project.
Feedback received during the final demo/poster presentation:
1. Keep the database on a web server instead of using a local database. This would help when the
user has multiple devices and needs the data on every one of them; he would not be required to
enter his details separately on each device.
2. Display a countdown timer when the password box springs up for the message-abort option, so
the user can see how much time is left to prevent a false positive from being sent out.
3. Run the voice recognition as a background service so the user wouldn’t have to open the app to
send out an emergency notification. (We already tried this and explained to the TA and the
professor during the demo why it was not working.)
4. Following up on 3), if the app is running in the background, notify the user with phone
vibration (or a ringtone) when the application is triggered and the message is about to be sent
out.
Future developments:
1. Recognize the user’s voice even when the phone is locked, with the voice recognition running
as a background service and the microphone always listening.
2. Send the emergency message via WhatsApp, Facebook, and other social networking apps.
3. Record video/audio of events taking place once the user shouts for help, which can be used as
evidence later on.
4. Let the user customize the phrase used for sending out the alert message.
11. PROJECT MEMBER BREAKDOWN
11.1 Chai-Wei Chang
AsyncTasks for the audio listener and recognizer; implementation of the PocketSphinx library in
the MainActivity.
The background Service activity, which runs GPS in the background.
Implemented on-button-touch voice recognition using the Google Voice API (not used in the end).
Design Document 1 (Current Solutions and Architecture Diagram).
Enabled the user to set their personal information, including adding multiple contacts, without
using the database (modified later and not used): SetProfileActivity, Person model.
Stored voice into files every minute in the background service (however, we couldn’t figure out
how to use the AT&T API with this, so the service went unused).
11.2 Mamoon Ismail Khalid (Team Leader)
Led the application development and decided which features to add.
Worked with Chai-Wei Chang on the voice recognition in the main activity and its integration into
the project.
Assisted Chai-Wei Chang in his research on voice recognition using the AT&T API.
Came up with the idea of wrapping the application to handle any unhandled exception; tested the
app for conditions where it might crash and rectified them.
Tested phrase sequences for the app against the CMU dictionary used by PocketSphinx, finding that
“Please Help Me” is sufficiently unique and that anything starting with “I…” may cause false
alarms.
Poster presentation.
Design Document (final).
Presentation (first prototype).
Splash screen ImageView part of the UI.
11.3 Trishla Shah
Researched and found the PocketSphinx library and its base demo on GitHub, and implemented it in
the project.
Added to the main activity the function of displaying an alert dialog box to abort false alarms.
Added to the main activity the function of sending an SMS with the GPS location on voice
recognition.
Implemented a database helper and used it to store user information and emergency contacts:
DatabaseHelper.
The activity for setting up a profile and adding emergency contacts: SetProfileActivity, plus the
Person model.
The activity for editing the user’s entered details, in case they need to change their contacts
or other information: EditUserDetails.
The activity for displaying to the user their details and emergency contacts: MyDetails.
Final integration and testing of the voice recognition in the main activity with the rest of the
project.
Design Document 1.
Assisted Mamoon with the poster presentation.
Design Document (final).
Front-end UI screens: layouts, styles, buttons, TextViews, colors, EditViews, animations, ripple
effect, splash screen.
Attempted to send push notifications to users using Google Cloud Messaging, but without success.
12. PROJECT MILESTONES
DATE          TASKS COMPLETED
10/18/2015    Initial Design Document
4/2/2014      Base user interface completion; standalone functionality implementation; design
              document updated
11/2/2015     Initial prototype presentation
12/6/2015     Initial wiring of the Android application with Amazon Web Services; refining and
              testing of the UI
4/15/2015     Integration of the Android application with the web server and the CMU dictionary;
              final prototype ready for demonstration
12/15/2015    Presentation at LC400
13. BIBLIOGRAPHY
1. “The CMU Pronouncing Dictionary.” http://www.speech.cs.cmu.edu/cgi-bin/cmudict
2. “APKtool documentation and wiki.” https://code.google.com/p/android-apktool/
3. “SpeechRecognizer (without using PocketSphinx).”
http://developer.android.com/reference/android/speech/SpeechRecognizer.html
4. “The PocketSphinx Library.”
http://sourceforge.net/p/cmusphinx/discussion/help/thread/4ee9e2b5/ and
http://sourceforge.net/p/cmusphinx/code/HEAD/tree/trunk/pocketsphinx-android/jni/Android.mk
5. “Android-er: Apply Animation on Button.”
http://android-er.blogspot.com/2012/02/apply-animation-on-button.html
6. “How to Send Push Notifications Using GCM Service?” Android Tutorial Blog, 22 Sept. 2014.
http://programmerguru.com/android-tutorial/how-to-send-push-notifications-using-gcm-service
7. “Dialogs.” http://developer.android.com/guide/topics/ui/dialogs.html
8. “SmsManager.” http://developer.android.com/reference/android/telephony/SmsManager.html