WEEK 1
Date: 17/10/2021-23/10/2021
Objective:
- Understand the task for Final Year Project 1
- Contact FYP supervisor.
Activity:
Attended the FYP briefing session delivered by Dr Mohammad Ikhwan Jamaludin, which covered the course information and the steps to be completed throughout this semester. Checked the FYP website for the list of supervisors and made an appointment with a supervisor as soon as possible to discuss the project title. Contacted Dr Nasrul, my supervisor for the Final Year Project, on WhatsApp. He asked us to identify our own interests for our project title, as long as the project could help others and is in high demand in the market.
Achievement:
- Able to focus on the task, as the guideline is already given and easy to follow.
- Have a supervisor to guide me, to ask questions of, and to refer to.
WEEK 2
Date: 24/10/2021-30/10/2021
Objective:
- Find research of interest by following the guideline provided.
- Define the goals and my contribution for this project.
Activity:
This project is to help people in need, especially visually impaired persons who need guidance to recognise objects or navigate. I will explain the problems addressed, the things to improve, and the goals of my project. From the reading materials I have found, the main problem in my research is determining the effectiveness and performance of the device. In that case, I decided to find the best method to make this project uncomplicated and easy to access, thus making it a user-friendly device. The goal of the project is to study an object detection method that is robust to lighting conditions and viewing angle. It could help people in need, specifically visually impaired persons.
Achievement:
- Easier to find research papers now that I have a clear vision and goals.
- Prepared 3 keywords related to my FYP.
WEEK 3
Date: 31/10/2021-06/11/2021
Objective:
- Prepare the details and title to fill in the topic proposal form.
- Submit the project title and details through the online form.
Activity:
Proposed Topic: Assistive spectacle camera recognition of general objects for visually impaired persons.
Proposed objectives:
- To generate image recognition that can detect and identify objects as a feature of human-machine interaction, allowing more natural interaction without the use of complex devices.
- To understand a method suitable for completing the project successfully, as it is a state-of-the-art approach to image recognition problems, despite requiring huge data sets.
- To study and analyse the problems and challenges, which are generally approached in various ways, with different kinds of results and complexity.
Scope: This research will use image processing methods, and the aimed electrical property is the density of states.
Achievement:
- Completed the topic proposal for my project.
- Gained a clear vision of how my project is going to work.
WEEK 4
Date: 07/11/2021-13/11/2021
Objective:
- State 6 source references found using the keywords I prepared.
Activity:
- Sankaranarayanan, A., Veeraraghavan, A., & Chellappa, R. (2008). Object Detection, Tracking and Recognition for Multiple Smart Cameras. Proceedings of the IEEE, 96(10), 1606–1624. https://doi.org/10.1109/jproc.2008.928758
- Kumar, N. M., Singh, N. K., & Peddiny, V. K. (2019). Wearable Smart Glass: Features, Applications, Current Progress and Challenges. Second International Conference on Green Computing and Internet of Things (ICGCIoT), 577–582.
- Agarwal, R. (in press). Low Cost Ultrasonic Smart Glasses for Blind.
- Kim, J. H., Kim, S. K., Lee, T. M., & Lim, J. (2020). Smart Glasses using Deep Learning and Stereo Camera. IEEE 8th Global Conference on Consumer Electronics (GCCE), 294–295.
- Arora, A., Grover, A., Chugh, R., & Reka, S. S. (2019). Real Time Multi Object Detection for Blind Using Single Shot Multibox Detector. Wireless Personal Communications, 107(1), 651–661. https://doi.org/10.1007/s11277-019-06294-1
- Mahapatra, S. (2021). Real Time Object Detection Using YOLO v3 Tiny with Voice Feedback for Visually Impaired. International Journal for Research in Applied Science and Engineering Technology, 9(5), 1650–1653. https://doi.org/10.22214/ijraset.2021.34538
Achievement:
- Managed to find the first 6 articles related to my project.
WEEK 5
Date: 14/11/2021-20/11/2021
Objective:
- Summary of the literature review
Activity:
Object Detection, Tracking and Recognition for Multiple Smart Cameras
[Method proposed: Multiple Smart Camera]
This paper focuses on the problems of distributed detection, tracking, and recognition. It introduces the basics of projective geometry and discusses some of the concepts that are used extensively for detection and tracking. It shows that the presence of a ground plane can be used as a strong constraint for designing efficient and robust estimators of target location. It demonstrates how 2-D appearance models and 3-D shape and texture models can be used for recognition of objects. Some of the algorithms presented will need to be adapted to tackle the same detection, tracking, and recognition problems in camera networks containing possibly hundreds of cameras.
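The ground-plane constraint mentioned above is commonly expressed as a homography: a 3x3 matrix that maps an image pixel to a point on the ground plane. A minimal sketch of that mapping step (the matrix values below are made-up examples, not taken from the paper):

```python
# Sketch: mapping an image pixel (u, v) to ground-plane coordinates (x, y)
# via a 3x3 homography H, using homogeneous coordinates.

def apply_homography(H, u, v):
    """Apply homography H to pixel (u, v) and normalise by w."""
    x = H[0][0] * u + H[0][1] * v + H[0][2]
    y = H[1][0] * u + H[1][1] * v + H[1][2]
    w = H[2][0] * u + H[2][1] * v + H[2][2]
    return x / w, y / w  # divide out the homogeneous scale

# Example: the identity homography leaves the point unchanged.
H_identity = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
print(apply_homography(H_identity, 120.0, 80.0))  # -> (120.0, 80.0)
```

Once every camera shares one ground-plane coordinate frame, detections from different views can be fused by comparing their mapped (x, y) locations.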
Wearable Smart Glass: Features, Applications, Current Progress and Challenges
[Discussion: Comparison of Features Across Different Smart Glasses]
This study aims to explore the applications and challenges of augmented-reality-based smart glass. Among recent inventions, smart glass is a wearable device, typically referred to as switchable glass, that is capable of handling a wide range of computing activities an ordinary human cannot do. In this paper, insights into the smart glass and its design factors were highlighted. Moreover, its features and various commercially available smart glasses were carefully studied.
Low Cost Ultrasonic Smart Glasses for Blind
[Method proposed: Ultrasound sensor]
The central unit receives information from the sensor about the obstacle distance, processes it according to the coding done, and sends the output through the buzzer; the power supply feeds the central unit, which distributes power to the different components. These smart glasses are very easy to use and very simple to understand. If a blind person uses them 2-3 times, he/she will understand the working and can handle them.
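The processing step the central unit performs can be sketched as echo-time ranging: the ultrasonic pulse travels to the obstacle and back, so the distance is half the round trip at the speed of sound. The buzzer threshold value here is my assumption, not a figure from the paper:

```python
# Sketch of ultrasonic ranging (assuming an HC-SR04-style sensor that
# reports the echo round-trip time; not code from the paper).

SPEED_OF_SOUND_CM_PER_S = 34300  # in air at roughly 20 degrees C

def echo_to_distance_cm(echo_time_s):
    """The pulse travels out and back, so halve the round-trip distance."""
    return SPEED_OF_SOUND_CM_PER_S * echo_time_s / 2

def should_buzz(distance_cm, threshold_cm=100):
    """Assumed rule: sound the buzzer when an obstacle is within 1 m."""
    return distance_cm <= threshold_cm

distance = echo_to_distance_cm(0.004)  # a 4 ms round trip
print(distance, should_buzz(distance))
```

A 4 ms echo corresponds to roughly 68.6 cm, which would trigger the buzzer under the assumed 1 m threshold.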
Achievement:
- Gained a lot more information and help from the readings.
WEEK 6
Date: 21/11/2021-27/11/2021
Objective:
- Summary of the literature review
Activity:
Smart Glasses using Deep Learning and Stereo Camera
[Method proposed: Deep Learning and Stereo Camera]
Deep learning algorithms cannot be performed on the low-level MCU of smart glasses, since these algorithms compute with a lot of data. Drivers are informed of the blind user's location through the buzzer and LED. The stereo cameras are used to calculate the distance between the blind user and the obstacle. The vibration motor and the buzzer operate according to the distance from the obstacle. The YOLO v3 algorithm was used to recognise obstacles; the network uses Darknet-53 as its base feature extractor. YOLO v3 performs multilabel classification for objects detected in images.
Real Time Multi Object Detection for Blind Using Single Shot Multibox Detector
[Method proposed: Single Shot Multibox Detector]
This assistant is an alert system that captures the surrounding view of the blind person and processes it in real time, at a frame rate of 60 FPS (frames per second), to detect objects and guide the subject accordingly. The information is then sent through text-to-speech conversion, and the output is fed as speech signals to the connected earphones. The idea is that the blind person focuses on objects that are in front of and close to him/her. If manufactured as a product, this technology could assist blind people with their mobility at an affordable price and can keep pace with future technology, as this field is still under research and evolving.
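The idea of focusing on objects that are in front and close can be sketched as a prioritisation step over the detector's bounding boxes. The scoring rule below is my assumption for illustration (large boxes as a proxy for nearness, centred boxes as a proxy for "in front"), not the paper's exact method:

```python
# Sketch: pick one detection to announce per frame.
# Each detection is (label, confidence, (x, y, w, h)) in pixel coordinates.

def pick_priority_detection(detections, frame_w=640):
    """Prefer large (likely close) boxes near the horizontal centre."""
    def score(det):
        label, conf, (x, y, w, h) = det
        area = w * h                                 # proxy for nearness
        centre_offset = abs((x + w / 2) - frame_w / 2)  # proxy for "in front"
        return area - centre_offset
    return max(detections, key=score) if detections else None

frame = [
    ("person", 0.9, (300, 100, 120, 200)),  # big, near centre
    ("chair", 0.8, (10, 200, 60, 80)),      # small, at the edge
]
print(pick_priority_detection(frame)[0])  # -> person
```

The chosen label would then be the one passed on to the text-to-speech stage, keeping the audio channel from being flooded by every detection in the frame.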
Real Time Object Detection Using YOLO v3 Tiny with Voice Feedback for Visually Impaired
[Method proposed: You Only Look Once (YOLO) v3 with Google Voice Feedback]
There are several object detectors, namely: Single Shot Detector (SSD), R-CNN, Fast R-CNN, Faster R-CNN, and YOLO. This project uses the sense of hearing to visualise objects in the surroundings using the "You Only Look Once: Unified, Real-time Object Detection" algorithm, trained on the COCO dataset, to identify the objects present before the person; thereafter, the label of the detected object is translated to audio with the aid of Google Text-to-Speech, which is the expected output. This project, made with the help of deep learning and a Raspberry Pi, will greatly help visually impaired individuals by acting as a tool that connects them to the world, surpassing their visual disability.
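The detection-to-speech step can be sketched as filtering the detector's (label, confidence) pairs by a threshold and building the sentence that would then be handed to a TTS engine such as Google Text-to-Speech. The 0.5 threshold is my assumption, not a value stated in the paper:

```python
# Sketch: turn YOLO-style detections into the sentence sent to TTS.
# detections: list of (coco_label, confidence) pairs.

CONF_THRESHOLD = 0.5  # assumed cut-off for announcing a detection

def announce(detections):
    """Keep confident detections and build the text to be spoken."""
    kept = [label for label, conf in detections if conf >= CONF_THRESHOLD]
    if not kept:
        return "No object detected"
    return "Detected: " + ", ".join(kept)

print(announce([("person", 0.92), ("dog", 0.35), ("chair", 0.61)]))
# -> Detected: person, chair
```

In the real pipeline this string would be passed to the gTTS library and played through the earphones rather than printed.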
Achievement:
- Gained a lot more information and help from the readings.