Applied Technologies Dissertations and Theses
Recent Additions
Item
Enhancing forthcoming trend estimation in trading platform based on multiphase combination of sentiment analysis and LSTM (2024) Trehan, Mohita; Unitec Institute of Technology
Since its inception, the stock market has piqued the curiosity of researchers, and several attempts have been made to forecast its future patterns. The market is dynamic, and the value of an organisation's stock price is influenced by a variety of elements, including previous stock data and public attitudes and opinions. Researchers have shown that "public sentiment", which splits the public into two factions, one in favour of a stock and the other against it, is the most influential of these elements. Analysing the relationship between sentiment and stock prices became simpler as Artificial Intelligence (AI) advanced and strong Machine Learning (ML) and Deep Learning (DL) algorithms were introduced. The number of outlets where the public can express their opinions has grown exponentially as the world entered the internet era. Researchers have experimented with several data sources, such as social networking websites, financial news websites, online global newsrooms, the Yahoo Finance website, and other forums for exchanging opinions, in order to gather information about investor and public sentiment as well as historical stock data. The forecasts have become more accurate over time as a result of testing various algorithms. Most of these studies have focused on larger stock markets, such as those in the USA, India, China, Japan, and Europe. Comparatively smaller stock markets cannot use the same strategies, since they are virtually independent and heavily influenced by local news and public opinion, which might differ greatly from those of the rest of the globe. This research addresses that gap by focusing on the New Zealand Stock Exchange (NZX).
This thesis explores the field of stock market forecasting with a particular emphasis on the NZX, a market that has received relatively little attention due to its distinct architecture and autonomous operations. In order to improve Future Trend estimation in trading platforms, this dissertation makes use of multiphase machine learning and deep learning methods. The suggested method combines sentiment analysis with historical stock data from Yahoo Finance and financial news data from "sharechat.co.nz". Stock data sets for five of the largest New Zealand organisations have been obtained. These companies were selected because the textual data mentioned them more often than others, which allows the sentiment scores to accurately reflect changes in each company. In this research, Future Trend prediction models are built employing deep learning techniques, specifically Long Short-Term Memory (LSTM) networks, across five different sub-processes. Across these sub-processes, the impact of sentiment news and historical stock data on Future Trend estimates is examined. Furthermore, feature selection strategies are used to reduce the possibility of overfitting. The thesis includes a thorough literature review, a detailed explanation of the research methods, and the nuances of data collection and pre-processing. It goes on to explain how the suggested prediction model was assembled and clarifies the ramifications of the study's findings.
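As an illustration of the kind of data fusion this methodology describes, the sketch below (a hypothetical simplification, not the thesis code) pairs each day's normalised closing price with a daily news sentiment score and slices the result into fixed-length windows of the shape LSTM networks consume:

```python
import numpy as np

def make_windows(prices, sentiment, lookback=3):
    """Pair each day's closing price with its news sentiment score and
    slice the series into fixed-length windows for sequence models such
    as LSTMs. Returns inputs of shape (samples, lookback, 2) and the
    next-day normalised price as the target."""
    prices = np.asarray(prices, dtype=float)
    sentiment = np.asarray(sentiment, dtype=float)
    # Min-max normalise prices to [0, 1], a common pre-processing step.
    prices = (prices - prices.min()) / (prices.max() - prices.min())
    features = np.stack([prices, sentiment], axis=1)  # shape (days, 2)
    X, y = [], []
    for t in range(len(features) - lookback):
        X.append(features[t:t + lookback])  # lookback days of features
        y.append(prices[t + lookback])      # the day that follows
    return np.array(X), np.array(y)

# Hypothetical example: 6 trading days of prices, sentiment in [-1, 1].
X, y = make_windows([10, 11, 12, 11, 13, 14],
                    [0.1, 0.4, -0.2, 0.0, 0.5, 0.3])
print(X.shape, y.shape)  # (3, 3, 2) (3,)
```

Each training sample then carries both price history and sentiment, so a model can learn how the two jointly shape the next day's movement.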
Therefore, the principal contributions of this thesis are outlined as follows: 1) development of an innovative multiphase framework for Future Trend prediction for the NZX by integrating sentiment analysis data from regional news with historical stock data; 2) deployment and comparative evaluation of both LSTM and machine learning models, namely Random Forest (RF) and Support Vector Machine (SVM), throughout five sub-processes, showcasing their efficacy in forecasting future patterns for five well-known New Zealand businesses, with insights into how well they work and how they could potentially be leveraged to boost the accuracy of stock market trend predictions; 3) analysis of the special characteristics of the NZX, emphasising how sensitive it is to regional news and public opinion in contrast to other markets; 4) demonstration of how pre-processing methods like feature selection and data normalisation enhanced the predictive models' accuracy and effectiveness; 5) recognition of the difficulties and developments in acquiring and applying high-quality financial news data for New Zealand; 6) demonstration of how the prediction framework improves the accuracy of Future Trend prediction across the various sub-processes.
Item
Analysis and reconstruction of distorted speech using deep learning (2024) Patel, Raj; Unitec, Te Pūkenga
Communication through whispering is essential for individuals who have undergone laryngectomy, a surgical procedure involving removing part or all of the voice box. Whispering is unique due to its absence of fundamental frequency (pitch) and can be considered an alternative mode of communication for laryngectomy patients. While it is generally a quiet form of communication in healthy people, it often results in hushed (and sometimes unintelligible) speech in laryngectomised individuals, necessitating the use of prosthetics or specialised treatments. Current prosthetic solutions have inherent limitations.
Medical treatments pose a risk of post-surgical infection, and the speech generated by prosthetics sounds mechanical, which has motivated the development of computational methods. Some state-of-the-art deep learning algorithms have been developed to generate natural-sounding speech; however, they have focused on reconstructing whispered speech into normal speech, not on laryngectomised or distorted speech. This thesis focuses on analysing these deep learning algorithms using objective evaluation metrics and aims to apply these existing algorithms to a laryngectomised dataset for the first time in the literature. We discuss the results of these evaluations and perform a comparative analysis between the models. Our analysis starts with GAN-based models, moves on to WESPER, a prediction-based model, and finally examines voice-conversion-based models developed to convert speech from one speaker style to another and to translate one language into another. Our initial analysis comprises 198 tests on 11 models and 6 objective evaluation metrics. The evaluation is performed on the testing dataset, which has three patient categories, namely Partial Laryngectomy (PL), Total Laryngectomy (TL), and Total Laryngectomy with Trachea Esophageal Puncture (TLTEP). Based on the results of this evaluation, we propose modifications to the architecture of five GAN-based models, in particular adjusting the models and loss functions to improve the outcome for laryngectomy patients. These modifications are cross-compatible with each other, yielding a total of 25 proposed models. For a better understanding of the features of laryngectomised speech, we include the laryngectomised dataset, combined with the wTIMIT dataset, in the training process.
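Objective evaluation metrics of the kind used in this thesis compare the spectra of reference and reconstructed speech. The sketch below is a minimal, generic log-spectral distance for illustration only; it is not the thesis's evaluation code, and the signals it is run on here are synthetic:

```python
import numpy as np

def log_spectral_distance(reference, reconstructed, n_fft=256):
    """Log-spectral distance (in dB) between two equal-length signals:
    the RMS difference of their log power spectra. Lower is better;
    identical signals score exactly 0."""
    ref_spec = np.abs(np.fft.rfft(reference, n=n_fft)) ** 2
    rec_spec = np.abs(np.fft.rfft(reconstructed, n=n_fft)) ** 2
    eps = 1e-12  # guard against log of zero in silent bins
    diff = 10 * np.log10(ref_spec + eps) - 10 * np.log10(rec_spec + eps)
    return float(np.sqrt(np.mean(diff ** 2)))

# A clean synthetic tone versus a noisy copy of itself.
t = np.linspace(0, 1, 1000, endpoint=False)
clean = np.sin(2 * np.pi * 100 * t)
noisy = clean + 0.1 * np.random.default_rng(0).standard_normal(1000)
print(log_spectral_distance(clean, clean))  # 0.0
print(log_spectral_distance(clean, noisy))  # larger than 0
```

Metrics of this shape let reconstructed laryngectomised speech be scored against a reference without listener studies, which is what makes large comparative runs (here, 198 tests) feasible.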
Using the same set of objective evaluation metrics, these proposed models demonstrate a better denoising effect in reconstructed speech, better spectral features, and better intelligibility than the existing models.
Item
Predicting wildfires from satellite images using deep learning (2024) Mammen, Blesson; Unitec, Te Pūkenga
Detecting the possibility of wildfires early can help individuals and organisations respond appropriately and minimise the potential damage caused. This report investigates the use of MobileNetV3 for predicting the occurrence of wildfires from satellite images that do not contain visible wildfire spots. The report reviews current research and highlights the value of satellite imagery as a homogeneous data source for wildfire prediction. A thorough evaluation and comparison of MobileNetV3's performance with larger, more complicated models like ResNet50 and VGG19 is conducted. The findings demonstrate MobileNetV3's efficacy in balancing computational efficiency with predictive power, offering a lightweight yet effective alternative to traditional models. By examining the potential of lightweight neural networks in managing complex and difficult environmental data, this report advances wildfire prediction methodologies, especially in resource-constrained contexts. This research focuses on a key challenge: developing a model designed to predict small wildfires from satellite images that do not contain any visible wildfire spots. This is achieved by training the model on a dataset comprising satellite images of areas where wildfires with a spot size just greater than 0.01 acres have occurred. Our approach addresses a critical gap in wildfire management by focusing on predicting these small-scale fires without needing heterogeneous data collection. The ability to predict small wildfires from non-fire satellite images enhances the accuracy and utility of early warning systems.
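For early-warning use, recall on the fire class matters more than raw accuracy, since a missed fire is costlier than a false alarm. As a generic illustration (not the report's evaluation code, and with fabricated confusion-matrix counts), the two metrics are computed as follows:

```python
def recall_and_accuracy(tp, fp, fn, tn):
    """Binary-classification metrics from confusion-matrix counts.
    recall   = TP / (TP + FN): fraction of true fire images detected.
    accuracy = (TP + TN) / all: fraction of all images, fire and
    no-fire alike, that were classified correctly."""
    recall = tp / (tp + fn)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return recall, accuracy

# Hypothetical counts on a held-out test set of 200 satellite images.
recall, accuracy = recall_and_accuracy(tp=92, fp=28, fn=8, tn=72)
print(recall, accuracy)  # 0.92 0.82
```

With counts like these, a model can trade some accuracy (more false alarms) for high recall, which is the right bias for a system whose job is to flag areas for preventive attention.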
Using satellite images as a single source of data ingestion removes the need for heterogeneous data collection involving soil and atmospheric data as well as vegetative and geological data. It also allows for the implementation of targeted preventive measures, such as controlled burns, the creation of firebreaks, and stricter fire bans during high-risk periods. Motivated by the capacity to generalise and the possibility of overcoming challenges posed by cloud cover, haze, and diverse landscapes, this research uses MobileNetV3, a deep learning model trained with transfer learning, to predict wildfire occurrence from satellite images. The MobileNetV3 model has given promising results, with a recall of 92 percent and an accuracy of over 82 percent. Despite being a lighter model, MobileNetV3 has shown robust results when evaluated alongside heavier models like ResNet50 and VGG19.
Item
A development framework for software integration projects – case study: Web app Integration with OpenWeather API (2023) Thirunahari, Siddartha; Unitec, Te Pūkenga
RESEARCH QUESTIONS
Q1. What are the key stages of the Software Development Life Cycle (SDLC) implementation in software integration projects, and how do they contribute to the successful integration of software systems?
Q2. In what ways does the development process of software integrations differ from conventional software development approaches, and what specific considerations are essential for ensuring effective integration?
Q3. How can the type of integration in software integration projects be determined, considering factors such as system compatibility, data interchange requirements, and integration architecture?
Q4. What approaches and methodologies can be employed to ensure the overall quality of the final software integration product, with specific emphasis on functionality, efficiency, and maintainability, as well as adherence to industry standards and best practices?
ABSTRACT
Software development has experienced rapid growth and advancement, leading to the adoption of state-of-the-art designs, methods, techniques, and tools to deliver high-quality software solutions. However, proficiency in programming alone is insufficient for ensuring reliable, feasible, cost-effective, and high-quality software products. Developers must consider various aspects of the software development life cycle (SDLC) to enhance software solutions. This includes analytical and critical thinking skills, envisioning real-world business cases, and emphasising quality assurance through thorough testing. Furthermore, cost-effective software solutions can be achieved through detailed project requirements and scope, outsourcing options, sound project planning, and agile methodologies for handling requirement changes. A gap identified in software development is the lack of a comprehensive, generic framework for software systems integration. Such a framework would provide developers with the necessary knowledge and resources to successfully undertake industrial software integration projects and deliver leading solutions. Without proper guidance, developers may face challenges in upskilling themselves in software integration. This research proposes a generic integration development framework to produce and deliver high-quality software integration solutions. The framework offers step-by-step guidance throughout each phase of the SDLC, drawing insights from real-world software integration projects. A case study is conducted to demonstrate the application of the proposed framework in a software integration project. The case study involves the development of a small-scale web application using the ReactJS front-end development framework, integrating with the OpenWeather API to retrieve weather forecasting data.
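The integration step in such a case study amounts to requesting JSON from the weather API and extracting the fields the app displays. The sketch below is illustrative only (the case study's app is written in ReactJS, not Python): the field names follow the OpenWeather current-weather JSON format, and the sample payload is fabricated, so no network call is made:

```python
import json

def parse_current_weather(payload: str) -> dict:
    """Extract the fields a small weather app typically displays from an
    OpenWeather-style current-weather JSON response. Temperature units
    depend on what the request asked for (e.g. units=metric)."""
    data = json.loads(payload)
    return {
        "city": data["name"],
        "description": data["weather"][0]["description"],
        "temp": data["main"]["temp"],
        "humidity": data["main"]["humidity"],
    }

# A fabricated sample response, trimmed to the fields used above.
sample = json.dumps({
    "name": "Auckland",
    "weather": [{"description": "light rain"}],
    "main": {"temp": 18.4, "humidity": 77},
})
print(parse_current_weather(sample))
# {'city': 'Auckland', 'description': 'light rain', 'temp': 18.4, 'humidity': 77}
```

Keeping the parsing in one small function like this is also what makes the maintainability and functionality quality factors discussed below easy to test in isolation.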
The proposed framework is evaluated through a comparative analysis of selected software quality factors: functionality, efficiency, and maintainability. Functionality encompasses the accuracy of data fetched through API requests and API security; efficiency focuses on optimal resource utilisation for application performance; and maintainability emphasises code maintainability for future improvements. The analysis validates the improved reliability achieved by implementing the proposed software integration framework and highlights the quality of the final product compared to the pre-framework implementation. By bridging this research gap and providing a generic integration framework, this research contributes to the advancement of software integration practices. It equips developers with the understanding and insights required to excel in software integration projects, ultimately leading to the development of reliable and high-quality software solutions.