Search results for: data pipeline
23218 Blockchain in Saudi E-Government: A Systematic Literature Review
Authors: Haitham Assiri, Priyadarsi Nanda
Abstract:
The world is gradually entering the fourth industrial revolution. E-Government services are scaling government operations across the globe. However, as promising as an e-Government system may be, it is also susceptible to malicious attacks if not properly secured. This study found that, in Saudi Arabia, the e-Government website, Yesser, is vulnerable to external attacks, which can lead to a breach of data integrity and privacy. In this paper, a Systematic Literature Review (SLR) was conducted to explore how the Kingdom of Saudi Arabia can take the necessary measures to strengthen its e-Government system using blockchain. Blockchain is one of the emerging technologies shaping the world through its applications in finance, elections, healthcare, etc.; it secures systems and brings more transparency. A total of 28 papers were selected for this SLR, and 19 of them showed that blockchain could significantly enhance the security and privacy of Saudi Arabia's e-government system. Other papers also concluded that blockchain is effective, albeit in integration with other technologies such as IoT, AI and big data. These papers have been analysed to distil the findings and set the stage for future research into the subject.
Keywords: blockchain, data integrity, e-government, security threats
Procedia PDF Downloads 250
23217 Geospatial Information for Smart City Development
Authors: Simangele Dlamini
Abstract:
Smart city development is seen as a way of facing the challenges brought about by the growing urban population the world over. Research indicates that cities have a role to play in combating urban challenges such as crime, waste disposal, greenhouse gas emissions, and resource efficiency. The solutions should not make city management less sustainable; they should be solutions-driven, cost- and resource-efficient, and smart. This study explores how the City of Johannesburg, South Africa, can use Geographic Information Systems (GIS), Big Data and the Internet of Things (IoT) to identify opportune areas in which to initiate smart city initiatives such as smart safety, smart utilities, smart mobility, and smart infrastructure in an integrated manner. The study combines Big Data with real-time data sources to identify hotspot areas that will benefit from ICT interventions. The GIS intervention will help the city avoid a silo approach in its smart city development initiatives, an approach that has led to the failure of smart city development in other countries.
Keywords: smart cities, internet of things, geographic information systems, Johannesburg
Procedia PDF Downloads 149
23216 Language Errors Used in “The Space between Us” Movie and Their Effects on Translation Quality: Translation Study toward Discourse Analysis Approach
Authors: Mochamad Nuruz Zaman, Mangatur Rudolf Nababan, M. A. Djatmika
Abstract:
Both society and education emphasize good communication as a way of building interpersonal skills. Everyone has the capacity to understand something new, whether with good comprehension or poor understanding. Poor understanding gives rise to language errors when interactions take place between people meeting for the first time, who do not yet know one another because of the distance between them. The movie “The Space between Us” tells a love-adventure story between a boy from Mars and a girl from Earth, whose conversations often miscarry because of their different climates and environments. Moviegoers must therefore rely on the subtitles to enjoy the movie fully; yet the Indonesian subtitles and the English dialogue still show overlapping understanding in the translation. Translation here involves a source language (SL, the English dialogue) and a target language (TL, the Indonesian subtitles). This research gap is formulated in the research question of how language errors occur in the movie and what their effects on translation quality are, analysed through a translation study with a discourse analysis approach. The goal of the research is to describe the language errors and their translation qualities in order to create a good atmosphere in movie media. The research is an embedded study with a qualitative design. The research locations consist of setting, participant, and event as the focused, determined boundary. The data sources are the movie “The Space between Us” and informants (translation quality raters). The sampling is criterion-based (purposive) sampling. Data collection techniques use content analysis and questionnaires. Data validation applies data source and method triangulation. Data analysis involves domain, taxonomy, componential, and cultural theme analysis. The findings show that the language errors occurring in the movie are referential, register, society, textual, receptive, expressive, individual, group, analogical, transfer, local, and global errors. Their effects on translation quality are discussed in terms of the translation techniques found in the data: amplification, borrowing, description, discursive creation, established equivalent, generalization, literal translation, modulation, particularization, reduction, substitution, and transposition.
Keywords: discourse analysis, language errors, The Space between Us movie, translation techniques, translation quality instruments
Procedia PDF Downloads 219
23215 A Coupling Study of Public Service Facilities and Land Price Based on Big Data Perspective in Wuxi City
Authors: Sisi Xia, Dezhuan Tao, Junyan Yang, Weiting Xiong
Abstract:
Against the background of Chinese urbanization shifting from incremental development to stock development, the completeness of urban public service facilities is essential to urban spatial quality. As public service facilities form a huge and complicated system, clarifying how their various types relate to the land market price is key to optimizing the spatial layout. This paper takes Wuxi City as a representative sample location and establishes a digital analysis platform using urban land price data and several high-precision big data acquisition methods. On this basis, it analyzes the coupling relationship between different public service categories and land price, summarizing the coupling patterns between the distribution of urban public facilities and fluctuations in urban land prices. Finally, the internal mechanism linking the two elements is explored, providing a reference for the optimal layout of urban planning and public service facilities.
Keywords: public service facilities, land price, urban spatial morphology, big data
Procedia PDF Downloads 215
23214 Structural Damage Detection Using Modal Data Employing Teaching Learning Based Optimization
Authors: Subhajit Das, Nirjhar Dhang
Abstract:
Structural damage detection is a challenging task in the field of structural health monitoring (SHM). Damage detection methods mainly focus on determining the location and severity of the damage. Model updating is a well-known method to locate and quantify damage. In this method, an error function is defined in terms of the difference between the signal measured in the 'experiment' and the signal obtained from the undamaged finite element model. This error function is minimized with a suitable algorithm, and the finite element model is updated accordingly to match the measured response; the damage location and severity can then be identified from the updated model. In this paper, an error function is defined in terms of modal data, viz. frequencies and the modal assurance criterion (MAC), which is derived from the eigenvectors. This error function is minimized by the teaching-learning-based optimization (TLBO) algorithm, and the finite element model is updated accordingly to locate and quantify the damage. Damage is introduced into the model by reducing the stiffness of a structural member. The 'experimental' data are simulated by finite element modelling, and the error due to experimental measurement is introduced into the synthetic 'experimental' data by adding Gaussian random noise. The efficiency and robustness of this method are demonstrated through three examples: a truss, a beam and a frame problem. The results show that the TLBO algorithm efficiently detects both the damage location and the damage severity using modal data.
Keywords: damage detection, finite element model updating, modal assurance criteria, structural health monitoring, teaching learning based optimization
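As an illustration of the model-updating loop described above, the following is a minimal sketch in which TLBO recovers per-storey stiffness factors of a synthetic 3-DOF shear building from noisy 'measured' frequencies. The structure, the frequency-only error function (the paper's error also includes MAC terms) and all numbers are illustrative assumptions, not the paper's test problems.
```python
# Minimal model-updating sketch: TLBO searches for stiffness retention factors
# that minimize a frequency error. All values below are illustrative.
import numpy as np

rng = np.random.default_rng(0)
m = np.diag([1000.0, 1000.0, 1000.0])          # lumped storey masses (kg)
k0 = np.array([2.0e6, 2.0e6, 2.0e6])           # undamaged storey stiffnesses (N/m)

def stiffness_matrix(k):
    # standard 3-storey shear-building stiffness matrix
    return np.array([[k[0] + k[1], -k[1], 0.0],
                     [-k[1], k[1] + k[2], -k[2]],
                     [0.0, -k[2], k[2]]])

def frequencies(alpha):
    # alpha = per-storey stiffness retention factors (1 = undamaged)
    lam = np.linalg.eigvals(np.linalg.solve(m, stiffness_matrix(alpha * k0)))
    return np.sqrt(np.sort(np.abs(lam)))       # natural frequencies (rad/s)

# synthetic 'experimental' data: 30% stiffness loss in storey 2, plus noise
f_meas = frequencies(np.array([1.0, 0.7, 1.0])) * (1 + 0.005 * rng.standard_normal(3))

def error(alpha):
    return np.sum((frequencies(alpha) - f_meas) ** 2)

pop, dim, iters = 20, 3, 100
X = rng.uniform(0.3, 1.0, (pop, dim))          # population of candidate factors
for _ in range(iters):
    fit = np.array([error(x) for x in X])
    teacher = X[fit.argmin()]
    # teacher phase: move the class toward the best learner
    TF = rng.integers(1, 3)                    # teaching factor, 1 or 2
    Xn = np.clip(X + rng.random((pop, dim)) * (teacher - TF * X.mean(axis=0)), 0.3, 1.0)
    better = np.array([error(x) for x in Xn]) < fit
    X[better] = Xn[better]
    # learner phase: learn from a random classmate
    for i in range(pop):
        j = rng.integers(pop)
        if j == i:
            continue
        step = (X[i] - X[j]) if error(X[i]) < error(X[j]) else (X[j] - X[i])
        cand = np.clip(X[i] + rng.random(dim) * step, 0.3, 1.0)
        if error(cand) < error(X[i]):
            X[i] = cand

best = min(X, key=error)
print("identified stiffness factors:", np.round(best, 3))  # expected near [1.0, 0.7, 1.0]
```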
Procedia PDF Downloads 215
23213 Deployed Confidence: The Testing in Production
Authors: Shreya Asthana
Abstract:
Testers only know for certain that a feature they tested on staging works in production after the release goes live. Sometimes something breaks in production, and testers learn of it through a bug raised by an end user. Panic sets in when staging test results do not reflect current production behavior, and testers start doubting their skills when a user finally reports a bug. Testers can deploy their confidence on release day by testing in production. Once testing in production begins, test result accuracy improves because tests run on real-time data, and execution is somewhat faster than on staging due to the elimination of bad data. Feature flagging, canary releases, and data cleanup can help achieve this testing technique. This paper makes it easier to understand the steps needed to test in production before making a feature live and to modify an IT company's testing procedure, so that testers can provide a bug-free experience to end users. This study is valuable because too many people think that testing should be done on staging but not in production, and it is high time to pull people out of that old testing mindset into a new testing world. At the end of the day, all that matters is whether the features work in production.
Keywords: bug free production, new testing mindset, testing strategy, testing approach
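A minimal sketch of the feature-flagging idea mentioned above: the new code path ships dark, internal testers exercise it in production first, and a deterministic canary bucket ramps it to a small share of real users. The flag store, flag names and percentages are illustrative assumptions, not a specific vendor API.
```python
# Toy feature-flag gate: testers validate in production, then a 5% canary.
import hashlib

FLAGS = {
    "new_checkout": {"enabled": True, "testers": {"qa_user_1"}, "canary_pct": 5},
}

def is_enabled(flag: str, user_id: str) -> bool:
    cfg = FLAGS.get(flag)
    if not cfg or not cfg["enabled"]:
        return False
    if user_id in cfg["testers"]:
        return True                          # internal testers see it first
    # deterministic bucketing: the same user always lands in the same bucket
    bucket = int(hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest(), 16) % 100
    return bucket < cfg["canary_pct"]        # canary share of real users

print(is_enabled("new_checkout", "qa_user_1"))      # True: tester account
print(is_enabled("new_checkout", "some_customer"))  # True for ~5% of users
```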
Procedia PDF Downloads 77
23212 Cross-Validation of the Data Obtained for ω-6 Linoleic and ω-3 α-Linolenic Acids Concentration of Hemp Oil Using Jackknife and Bootstrap Resampling
Authors: Vibha Devi, Shabina Khanam
Abstract:
Hemp (Cannabis sativa) possesses a rich content of ω-6 linoleic and ω-3 α-linolenic essential fatty acids in the ratio of 3:1, a rare and highly desired ratio that enhances the quality of hemp oil. These components are beneficial for cell development and body growth, strengthen the immune system, possess anti-inflammatory action, lower the risk of heart problems owing to their anti-clotting property, and serve as a remedy for arthritis and various disorders. The present study applies supercritical fluid extraction (SFE) to hemp seed at various parameter conditions: temperature (40-80) °C, pressure (200-350) bar, flow rate (5-15) g/min, particle size (0.430-1.015) mm and amount of co-solvent (0-10) % of the solvent flow rate, through a central composite design (CCD). The CCD suggested 32 sets of experiments, which were carried out. As the SFE process includes a large number of variables, the present study recommends the application of resampling techniques for cross-validation of the obtained data. Cross-validation refits the model on each resample to obtain information regarding the error, variability, deviation, etc. Bootstrap and jackknife are the most popular resampling techniques; they create a large number of datasets through resampling from the original dataset and analyze them to check the validity of the obtained data. Jackknife resampling is based on eliminating one observation from the original sample of size N without replacement; here, the resample size is 31 (one observation eliminated), repeated 32 times. Bootstrap is the frequently used statistical approach for estimating the sampling distribution of an estimator by resampling with replacement from the original sample; here, the resample size is 32, repeated 100 times. The estimands for these resampling techniques are the mean, standard deviation, variation coefficient and standard error of the mean. For the ω-6 linoleic acid concentration, the mean value was approximately 58.5 % for both resampling methods, which is the average (central value) of the sample means of all data points. Similarly, for the ω-3 α-linolenic acid concentration, the mean was observed as 22.5 % through both resamplings. Variance exhibits the spread of the data about the mean; a greater variance indicates a larger range of output data, which is 18 for ω-6 linoleic acid (ranging from 48.85 to 63.66 %) and 6 for ω-3 α-linolenic acid (ranging from 16.71 to 26.2 %). Further, the low standard deviation (approx. 1 %), low standard error of the mean (< 0.8) and low variation coefficient (< 0.2) reflect the accuracy of the sample for prediction. All the estimated variation coefficients, standard deviations and standard errors of the mean fall within the 95 % confidence interval.
Keywords: resampling, supercritical fluid extraction, hemp oil, cross-validation
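A minimal sketch of the two resampling schemes described above, applied to a synthetic stand-in for the 32 CCD runs: jackknife builds 32 leave-one-out replicates of size 31, and bootstrap draws 100 replicates of size 32 with replacement; the estimands (mean, SD, variation coefficient, SE of the mean) are computed from the replicates. The data here are synthetic, not the paper's measurements.
```python
# Jackknife vs. bootstrap cross-validation of summary statistics.
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(58.5, 1.0, 32)                  # stand-in for 32 CCD runs (wt %)
n = len(x)

# jackknife: n replicates of size n-1, without replacement
jack_means = np.array([np.delete(x, i).mean() for i in range(n)])
se_jack = np.sqrt((n - 1) / n * np.sum((jack_means - jack_means.mean()) ** 2))

# bootstrap: 100 replicates of size n, with replacement
boot = rng.choice(x, size=(100, n), replace=True)
boot_means = boot.mean(axis=1)
boot_sd = boot.std(axis=1, ddof=1)

print("mean (jackknife / bootstrap):", jack_means.mean().round(2),
      boot_means.mean().round(2))
print("SE of mean (jackknife / bootstrap):", se_jack.round(3),
      boot_means.std(ddof=1).round(3))
print("bootstrap SD / variation coefficient:", boot_sd.mean().round(3),
      (boot_sd / boot_means).mean().round(4))
```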
Procedia PDF Downloads 141
23211 Evaluation of Hydrocarbon Prospects of 'ADE' Field, Niger Delta
Authors: Oluseun A. Sanuade, Sanlinn I. Kaka, Adesoji O. Akanji, Olukole A. Akinbiyi
Abstract:
Prospect evaluation of the 'ADE' field was carried out using 3D seismic data and well log data. The field is located in the offshore Niger Delta, where the water depth ranges from 450 to 800 m. The objectives of this study are to explore deeper prospects and to ascertain the kinds of traps that are favorable for the accumulation of hydrocarbon in the field. Six horizons with major and minor faults were identified and mapped in the field. Time structure maps of these horizons were generated and, using the available check-shot data, converted to top structure maps, which were used to calculate the hydrocarbon volume. The results show that regional structural highs trending northeast-southwest (NE-SW) characterize a large portion of the field. These highs were observed across all horizons, revealing a regional post-depositional deformation. Three prospects were identified and evaluated to understand the different opportunities in the field, including a stratigraphic pinch-out and a bi-directional downlap. The results of this study show that the field holds potential for new opportunities that could be explored in further studies.
Keywords: hydrocarbon, play, prospect, stratigraphy
Procedia PDF Downloads 270
23210 Enhancing Information Technologies with AI: Unlocking Efficiency, Scalability, and Innovation
Authors: Abdal-Hafeez Alhussein
Abstract:
Artificial Intelligence (AI) has become a transformative force in the field of information technologies, reshaping how data is processed, analyzed, and utilized across various domains. This paper explores the multifaceted applications of AI within information technology, focusing on three key areas: automation, scalability, and data-driven decision-making. We delve into how AI-powered automation is optimizing operational efficiency in IT infrastructures, from automated network management to self-healing systems that reduce downtime and enhance performance. Scalability, another critical aspect, is addressed through AI’s role in cloud computing and distributed systems, enabling the seamless handling of increasing data loads and user demands. Additionally, the paper highlights the use of AI in cybersecurity, where real-time threat detection and adaptive response mechanisms significantly improve resilience against sophisticated cyberattacks. In the realm of data analytics, AI models—especially machine learning and natural language processing—are driving innovation by enabling more precise predictions, automated insights extraction, and enhanced user experiences. The paper concludes with a discussion on the ethical implications of AI in information technologies, underscoring the importance of transparency, fairness, and responsible AI use. It also offers insights into future trends, emphasizing the potential of AI to further revolutionize the IT landscape by integrating with emerging technologies like quantum computing and IoT.
Keywords: artificial intelligence, information technology, automation, scalability
Procedia PDF Downloads 17
23209 D3Advert: Data-Driven Decision Making for Ad Personalization through Personality Analysis Using BiLSTM Network
Authors: Sandesh Achar
Abstract:
Personalized advertising holds greater potential for higher conversion rates compared to generic advertisements. However, its widespread application in the retail industry faces challenges due to complex implementation processes. These complexities impede the swift adoption of personalized advertisement on a large scale. Personalized advertisement, being a data-driven approach, necessitates consumer-related data, adding to its complexity. This paper introduces an innovative data-driven decision-making framework, D3Advert, which personalizes advertisements by analyzing personalities using a BiLSTM network. The framework utilizes the Myers–Briggs Type Indicator (MBTI) dataset for development. The employed BiLSTM network, specifically designed and optimized for D3Advert, classifies user personalities into one of the sixteen MBTI categories based on their social media posts. The classification accuracy is 86.42%, with precision, recall, and F1-Score values of 85.11%, 84.14%, and 83.89%, respectively. The D3Advert framework personalizes advertisements based on these personality classifications. Experimental implementation and performance analysis of D3Advert demonstrate a 40% improvement in impressions. D3Advert’s innovative and straightforward approach has the potential to transform personalized advertising and foster widespread personalized advertisement adoption in marketing.
Keywords: personalized advertisement, deep learning, MBTI dataset, BiLSTM network, NLP
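A minimal sketch of the kind of BiLSTM classifier described above: token ids from a post go in, logits over the sixteen MBTI classes come out. The vocabulary size, dimensions and mean-pooling head are illustrative assumptions; the paper's network is specifically designed and optimized for D3Advert and is not reproduced here.
```python
# Toy BiLSTM text classifier for 16 MBTI classes (PyTorch).
import torch
import torch.nn as nn

class MBTIBiLSTM(nn.Module):
    def __init__(self, vocab_size=20000, emb_dim=128, hidden=64, n_classes=16):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.bilstm = nn.LSTM(emb_dim, hidden, batch_first=True,
                              bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_classes)  # 2x: forward + backward states

    def forward(self, token_ids):                     # (batch, seq_len)
        h, _ = self.bilstm(self.embed(token_ids))     # (batch, seq_len, 2*hidden)
        return self.head(h.mean(dim=1))               # mean-pool over time -> logits

model = MBTIBiLSTM()
logits = model(torch.randint(1, 20000, (8, 50)))      # 8 posts, 50 tokens each
print(logits.shape)                                   # torch.Size([8, 16])
```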
Procedia PDF Downloads 44
23208 Communication Infrastructure Required for a Driver Behaviour Monitoring System, ‘SiaMOTO’ IT Platform
Authors: Dogaru-Ulieru Valentin, Sălișteanu Ioan Corneliu, Ardeleanu Mihăiță Nicolae, Broscăreanu Ștefan, Sălișteanu Bogdan, Mihai Mihail
Abstract:
The SiaMOTO system is a communications and data processing platform for vehicle traffic. The human factor is the most important factor in the generation of these data, as the driver is the one who dictates the trajectory of the vehicle. Like any trajectory, it is described by specific parameters referring to position, speed and acceleration, and constant knowledge of these parameters allows complex analyses. Roadways allow many vehicles to travel through their confined space, and the overlapping trajectories of several vehicles increase the likelihood of collision events, known as road accidents. Any such event has causes that lead to its occurrence, so the conditions for its occurrence are known. The human factor is predominant in deciding the trajectory parameters of the vehicle on the road, so monitoring it through the events reported by the DiaMOTO device over time will generate a guide to target any potentially high-risk driving behavior and reward those who control the driving phenomenon well. In this paper, we have focused on detailing the communication infrastructure between the DiaMOTO device and the traffic data collection server, the infrastructure through which the database to be used for complex AI/DLM analysis is built. The central element of this description is the data string in Codec-8 format sent by the DiaMOTO device to the SiaMOTO collection server database. The data presented are specific to a functional infrastructure implemented at the experimental model stage, by installing DiaMOTO devices with unique codes, integrating ADAS and GPS functions, on a fleet of 50 vehicles, through which vehicle trajectories can be monitored 24 hours a day.
Keywords: DiaMOTO, Codec-8, ADAS, GPS, driver monitoring
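A minimal sketch of decoding the header and the GPS element of the first record of a Codec-8 frame, assuming the widely documented Teltonika-style layout (zero preamble, length, codec id 0x08, record count, then per record an 8-byte millisecond timestamp, priority byte, and longitude/latitude stored as signed integers in 1e-7 degrees). IO elements and the CRC check are omitted; this is not the SiaMOTO server's actual parser.
```python
# Parse the header and first GPS record of a Codec-8-style AVL frame.
import struct
from datetime import datetime, timezone

def parse_codec8_first_record(frame: bytes) -> dict:
    preamble, length = struct.unpack_from(">II", frame, 0)
    assert preamble == 0, "Codec-8 frames start with four zero bytes"
    codec_id, n_records = struct.unpack_from(">BB", frame, 8)
    assert codec_id == 0x08, "not a Codec-8 frame"
    # per record: timestamp(8) priority(1) lon(4) lat(4) alt(2) angle(2) sats(1) speed(2)
    ts_ms, prio, lon, lat, alt, angle, sats, speed = struct.unpack_from(">QBiiHHBH", frame, 10)
    return {
        "records": n_records,
        "timestamp": datetime.fromtimestamp(ts_ms / 1000, tz=timezone.utc),
        "priority": prio,
        "lon_deg": lon / 1e7, "lat_deg": lat / 1e7,
        "altitude_m": alt, "angle_deg": angle,
        "satellites": sats, "speed_kmh": speed,
    }

# usage with a synthetic frame (header + one GPS element, IO bytes omitted)
frame = struct.pack(">IIBB", 0, 0, 0x08, 1) + struct.pack(
    ">QBiiHHBH", 1700000000000, 1, 190000000, 480000000, 120, 90, 9, 60)
print(parse_codec8_first_record(frame))
```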
Procedia PDF Downloads 78
23207 Predictive Modeling of Bridge Conditions Using Random Forest
Authors: Miral Selim, May Haggag, Ibrahim Abotaleb
Abstract:
The aging of transportation infrastructure presents significant challenges, particularly concerning the monitoring and maintenance of bridges. This study investigates the application of Random Forest algorithms for predictive modeling of bridge conditions, utilizing data from the US National Bridge Inventory (NBI). The research is significant as it aims to improve bridge management through data-driven insights that can enhance maintenance strategies and contribute to overall safety. Random Forest is chosen for its robustness, ability to handle complex, non-linear relationships among variables, and its effectiveness in feature importance evaluation. The study begins with comprehensive data collection and cleaning, followed by the identification of key variables influencing bridge condition ratings, including age, construction materials, environmental factors, and maintenance history. Random Forest is utilized to examine the relationships between these variables and the predicted bridge conditions. The dataset is divided into training and testing subsets to evaluate the model's performance. The findings demonstrate that the Random Forest model effectively enhances the understanding of factors affecting bridge conditions. By identifying bridges at greater risk of deterioration, the model facilitates proactive maintenance strategies, which can help avoid costly repairs and minimize service disruptions. Additionally, this research underscores the value of data-driven decision-making, enabling better resource allocation to prioritize maintenance efforts where they are most necessary. In summary, this study highlights the efficiency and applicability of Random Forest in predictive modeling for bridge management. Ultimately, these findings pave the way for more resilient and proactive management of bridge systems, ensuring their longevity and reliability for future use.
Keywords: data analysis, random forest, predictive modeling, bridge management
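A minimal sketch of the modelling pipeline described above, run on synthetic NBI-like features; the feature names, the 0-9 condition scale and the synthetic deterioration rule are illustrative assumptions, not the study's data.
```python
# Train/test split, Random Forest fit, and feature-importance readout.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 2000
df = pd.DataFrame({
    "age_years": rng.integers(0, 100, n),
    "material": rng.integers(0, 5, n),            # coded construction material
    "adt": rng.integers(100, 100_000, n),         # average daily traffic
    "freeze_thaw_cycles": rng.integers(0, 80, n), # environmental proxy
    "years_since_repair": rng.integers(0, 40, n), # maintenance history proxy
})
# synthetic rating: older, heavily used, unmaintained bridges rate worse
rating = 9 - (df.age_years / 15 + df.years_since_repair / 12
              + df.adt / 40_000 + rng.normal(0, 0.8, n))
df["condition"] = rating.clip(0, 9).round().astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    df.drop(columns="condition"), df["condition"], test_size=0.25, random_state=0)
rf = RandomForestClassifier(n_estimators=300, random_state=0)
rf.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, rf.predict(X_test)))
print(dict(zip(X_train.columns, rf.feature_importances_.round(3))))
```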
Procedia PDF Downloads 22
23206 Validity and Reliability of Competency Assessment Implementation (CAI) Instrument Using Rasch Model
Authors: Nurfirdawati Muhamad Hanafi, Azmanirah Ab Rahman, Marina Ibrahim Mukhtar, Jamil Ahmad, Sarebah Warman
Abstract:
This study was conducted to generate empirical evidence on the validity and reliability of the items of the Competency Assessment Implementation (CAI) instrument, using the Rasch model for polytomous data aided by Winsteps software version 3.68. The construct validity was examined by analyzing the point-measure correlation index (PTMEA) and the infit and outfit MNSQ values; meanwhile, the reliability was examined by analyzing the item reliability index. A survey technique was used as the major method, administering the CAI instrument to 156 teachers from vocational schools. The results show that the reliability of the CAI instrument items was between 0.80 and 0.98. The PTMEA correlations are positive, meaning the items are able to distinguish between respondents of different ability. The statistical data obtained show that, out of 154 items, 12 items of the instrument are suggested for omission. It is hoped that this study could bring a new direction to the process of data analysis in educational research.
Keywords: competency assessment, reliability, validity, item analysis
Procedia PDF Downloads 445
23205 Electronic Equipment Failure due to Corrosion
Authors: Yousaf Tariq
Abstract:
There are many factors involved in electronic equipment failure, i.e., temperature, humidity, dust, smoke, etc. Corrosive gases are also one of the factors that may be involved in equipment failure. The sensitivity of electronic equipment increased when the "lead-free" regulation was enforced on manufacturers. In data centers, equipment like hard disks, servers, printed circuit boards, etc., has been exposed to gaseous contamination due to this increase in sensitivity. There is a worldwide standard to protect industrial electronics from corrosive gases, well known as ANSI/ISA S71.04-1985, Environmental Conditions for Control Systems: Airborne Contaminants. ASHRAE Technical Committee (TC) 9.9 members also recommended the ISA standard in their whitepaper on gaseous and particulate contamination guidelines for data centers. TC 9.9 members represent some of the major IT equipment manufacturers, e.g., IBM, HP, Cisco, etc. As per standard practice, the first step is to monitor the air quality in the data center. If the contamination level is higher than G1, gas-phase air filtration is required in addition to dust/smoke air filtration. It is important that outside fresh air entering the data center pass through a pressurization/recirculation process in order to absorb corrosive gases and maintain the level within the specified limit. It is also important that air quality monitoring be conducted once a year. Temperature and humidity should also be monitored as per standard practice to maintain levels within the specified limits.
Keywords: corrosive gases, corrosion, electronic equipment failure, ASHRAE, hard disk
Procedia PDF Downloads 330
23204 Evaluation of Diagnosis Performance Based on Pairwise Model Construction and Filtered Data
Authors: Hyun-Woo Cho
Abstract:
It is quite important to perform timely and intelligent production monitoring and diagnosis of industrial processes with respect to quality and safety issues. Compared with the monitoring task, fault diagnosis is the task of finding the process variables responsible for causing a specific fault in the process. It can help process operators investigate and eliminate root causes more effectively and efficiently. This work focused on combining a nonlinear statistical technique with a preprocessing method in order to implement practical real-time fault identification schemes for data-rich cases. To compare its performance with existing identification schemes, a case study on a benchmark process was performed under several scenarios. The results showed that the proposed fault identification scheme produced more reliable diagnosis results than linear methods. In addition, the use of the filtering step improved the identification results for complicated processes with massive data sets.
Keywords: diagnosis, filtering, nonlinear statistical techniques, process monitoring
Procedia PDF Downloads 244
23203 A Predictive Model of Supply and Demand in the State of Jalisco, Mexico
Authors: M. Gil, R. Montalvo
Abstract:
Business Intelligence (BI) has become a major source of competitive advantage for firms around the world. BI has been defined as the process of data visualization and reporting for understanding what happened and what is happening. Moreover, BI has been studied for its predictive capabilities in the context of trade and financial transactions. The current literature has identified that BI permits managers to identify market trends, understand customer relations, and predict demand for their products and services. This last capability of BI has been of special concern to academics, specifically due to its power to build predictive models adaptable to specific time horizons and geographical regions. However, the current BI literature focuses on predicting specific markets and industries, because the impact of such predictive models has been relevant to specific industries or organizations; it has not yet developed a predictive BI model that takes into consideration the whole economy of a geographical area. This paper seeks to create a predictive BI model that shows the bigger picture of a geographical area. It uses a data set from the Secretary of Economic Development of the state of Jalisco, Mexico, which includes data from all the commercial transactions that occurred in the state in recent years. By analyzing this data set, it will be possible to generate a BI model that predicts supply and demand for specific industries around the state of Jalisco. This research makes at least three contributions: firstly, a methodological contribution to the BI literature by generating the predictive supply and demand model; secondly, a theoretical contribution to the current understanding of BI, since the model presented here incorporates the whole picture of the economic field instead of focusing on a specific industry; and lastly, a practical contribution that might be relevant to local governments that seek to improve their economic performance by implementing BI in their policy planning.
Keywords: business intelligence, predictive model, supply and demand, Mexico
Procedia PDF Downloads 123
23202 A New Block Cipher for Resource-Constrained Internet of Things Devices
Authors: Muhammad Rana, Quazi Mamun, Rafiqul Islam
Abstract:
In the Internet of Things (IoT), many devices are connected and accumulate a sheer amount of data. These Internet-driven raw data need to be transferred securely to the end users via dependable networks. Consequently, the challenges of IoT security in various IoT domains are paramount. Cryptography is applied to secure networks for authentication, confidentiality, data integrity and access control. However, due to the resource-constrained properties of IoT devices, conventional ciphers may not be suitable for all IoT networks. This paper designs a robust and effective lightweight cipher to secure the IoT environment and meet the resource-constrained nature of IoT devices. We propose a symmetric, block-cipher-based lightweight cryptographic algorithm that increases the complexity of the block cipher while maintaining the lowest possible computational requirements. The proposed algorithm efficiently constructs the key register updating technique, reduces the number of encryption rounds, and adds a new layer between the encryption and decryption processes.
Keywords: internet of things, cryptography block cipher, S-box, key management, security, network
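The proposed cipher itself is not detailed in the abstract, so the following toy substitution-permutation sketch only illustrates the ingredients it names: an S-box layer (PRESENT's well-known 4-bit S-box is borrowed for concreteness), a per-round key-register update, and a reduced round count. It is a teaching sketch for 16-bit blocks, not the proposed algorithm and not a secure cipher.
```python
# Toy SPN round structure: key mix, S-box substitution, rotating key register.
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]   # PRESENT's 4-bit S-box

def rotate_key(key16: int, r: int) -> int:
    # toy key-register update: rotate left by 3 bits, mix in the round counter
    return (((key16 << 3) | (key16 >> 13)) & 0xFFFF) ^ r

def encrypt_block(block: int, key: int, rounds: int = 4) -> int:
    # reduced round count, as lightweight designs favour; encryption only here
    for r in range(1, rounds + 1):
        key = rotate_key(key, r)
        block ^= key                                    # round-key mixing
        nibbles = [(block >> s) & 0xF for s in (12, 8, 4, 0)]
        nibbles = [SBOX[n] for n in nibbles]            # S-box substitution layer
        block = sum(n << s for n, s in zip(nibbles, (12, 8, 4, 0)))
    return block

print(hex(encrypt_block(0x1234, 0xBEEF)))
```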
Procedia PDF Downloads 113
23201 BodeACD: Buffer Overflow Vulnerabilities Detecting Based on Abstract Syntax Tree, Control Flow Graph, and Data Dependency Graph
Authors: Xinghang Lv, Tao Peng, Jia Chen, Junping Liu, Xinrong Hu, Ruhan He, Minghua Jiang, Wenli Cao
Abstract:
As one of the most dangerous classes of vulnerabilities, buffer overflows must be detected effectively. Traditional detection methods are not accurate enough and consume too many resources to cope with today's complex and enormous code environments. To resolve these problems, we propose BodeACD, a method for buffer overflow detection based on the Abstract syntax tree, Control flow graph, and Data dependency graph of C/C++ programs with source code. BodeACD first constructs function-level buffer overflow samples available on GitHub, then represents them as code representation sequences, which fuse the control flow, data dependency, and syntax structure of the source code to reduce information loss during code representation. Finally, BodeACD learns vulnerability patterns through deep learning to perform vulnerability detection. The experimental results show that BodeACD increases precision and recall by 6.3% and 8.5%, respectively, compared with the latest methods, and can effectively improve vulnerability detection while reducing the false-positive and false-negative rates.
Keywords: vulnerability detection, abstract syntax tree, control flow graph, data dependency graph, code representation, deep learning
Procedia PDF Downloads 170
23200 Analysis of Financial Time Series by Using Ornstein-Uhlenbeck Type Models
Authors: Md Al Masum Bhuiyan, Maria C. Mariani, Osei K. Tweneboah
Abstract:
In the present work, we develop a technique for estimating the volatility of financial time series using a stochastic differential equation. Taking the daily closing prices of developed and emergent stock markets as the basis, we argue that incorporating stochastic volatility into the time-varying parameter estimation significantly improves forecasting performance via maximum likelihood estimation. Using the technique, we observe the long-memory behavior of the data sets and the one-step-ahead predicted log-volatility with ±2 standard errors, despite variation in the observed noise from a normal mixture distribution, because the financial data studied are not fully Gaussian. Also, the Ornstein-Uhlenbeck process followed in this work simulates the financial time series well, and the estimation algorithm scales to large data sets owing to its good convergence properties.
Keywords: financial time series, maximum likelihood estimation, Ornstein-Uhlenbeck type models, stochastic volatility model
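A minimal sketch of the estimation idea described above: an Ornstein-Uhlenbeck model is fitted to a simulated log-volatility series by maximum likelihood, using the exact Gaussian transition density of the process. The parameter values and the choice of a generic optimizer are illustrative assumptions.
```python
# OU model X_{t+dt} = mu + (X_t - mu) e^{-theta dt} + N(0, s^2 (1-e^{-2 theta dt})/(2 theta)),
# simulated and then re-estimated by maximizing the exact Gaussian likelihood.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(7)
theta, mu, sigma, dt, n = 3.0, -2.0, 0.5, 1 / 252, 2000

x = np.empty(n)
x[0] = mu
sd = sigma * np.sqrt((1 - np.exp(-2 * theta * dt)) / (2 * theta))
for t in range(1, n):                       # exact discretization of the OU SDE
    x[t] = mu + (x[t - 1] - mu) * np.exp(-theta * dt) + sd * rng.standard_normal()

def neg_log_lik(params):
    th, m, s = params
    if th <= 0 or s <= 0:
        return np.inf
    cond_mean = m + (x[:-1] - m) * np.exp(-th * dt)
    cond_sd = s * np.sqrt((1 - np.exp(-2 * th * dt)) / (2 * th))
    return -norm.logpdf(x[1:], cond_mean, cond_sd).sum()

fit = minimize(neg_log_lik, x0=[1.0, 0.0, 1.0], method="Nelder-Mead")
print("theta, mu, sigma:", np.round(fit.x, 3))   # expected near (3.0, -2.0, 0.5)
```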
Procedia PDF Downloads 242
23199 Detecting Black Hole Attacks in Body Sensor Networks
Authors: Sara Alshehri, Bayan Alenzi, Atheer Alshehri, Samia Chelloug, Zainab Almry, Hussah Albugmai
Abstract:
This paper concerns body area sensor networks that collect signals around a human body. Black hole attacks are the main security challenge because data traffic can be dropped at any node. The focus of our proposed solution is to efficiently route data packets while detecting black hole nodes.
Keywords: body sensor networks, security, black hole, routing, broadcasting, OMNeT++
Procedia PDF Downloads 645
23198 Data Analytics of Electronic Medical Records Shows Age-Related Differences in Diagnosis of Coronary Artery Disease
Authors: Maryam Panahiazar, Andrew M. Bishara, Yorick Chern, Roohallah Alizadehsani, Dexter Hadleye, Ramin E. Beygui
Abstract:
Early detection plays a crucial role in enhancing the outcome for patients with coronary artery disease (CAD). We utilized a big data analytics platform on ~23,000 patients with CAD, out of a total of 960,129 UCSF patients over 8 years, tracing the patients from their first encounter with a physician to the diagnosis and treatment of CAD. Characteristics such as demographic information, comorbidities, vitals, lab tests, medications, and procedures are included. There are statistically significant gender-based differences in the time from the first physician encounter to coronary artery bypass grafting (CABG) among patients younger than 60 years old (p-value = 0.03). There are no significant differences for patients between 60 and 80 years old (p-value = 0.8) or older than 80 (p-value = 0.4), at a 95% confidence level. Recognizing this would prompt significant changes in the guidelines for referring patients for diagnostic tests expeditiously, improving outcomes by avoiding delays in treatment.
Keywords: electronic medical records, coronary artery disease, data analytics, young women
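A minimal sketch of the kind of stratified comparison reported above: time from first encounter to CABG, compared between genders within an age stratum. The synthetic delays and the choice of Welch's t-test are illustrative assumptions; the paper does not state which test produced its p-values.
```python
# Compare time-to-CABG between two groups within the under-60 stratum.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
days_women_u60 = rng.gamma(shape=2.0, scale=120, size=150)  # hypothetical delays (days)
days_men_u60 = rng.gamma(shape=2.0, scale=90, size=300)

t, p = stats.ttest_ind(days_women_u60, days_men_u60, equal_var=False)
print(f"Welch t-test: t={t:.2f}, p={p:.4f}")  # p < 0.05 -> significant difference
```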
Procedia PDF Downloads 148
23197 Semi-Automatic Method to Assist Expert for Association Rules Validation
Authors: Amdouni Hamida, Gammoudi Mohamed Mohsen
Abstract:
In order to help the expert validate association rules extracted from data, several quality measures have been proposed in the literature. We distinguish two categories: objective and subjective measures. The first depends on a fixed threshold and on the quality of the data from which the rules are extracted. The second consists in providing the expert with tools to explore and visualize rules during the evaluation step. However, the number of extracted rules to validate remains high, making the manual rule-mining task very hard. To solve this problem, we propose, in this paper, a semi-automatic method to assist the expert during association rule validation. Our method uses rule-based classification as follows: (i) we transform association rules into classification rules (classifiers); (ii) we use the generated classifiers for data classification; (iii) we visualize association rules with their classification quality to give the expert an overview and to assist him during the validation process.
Keywords: association rules, rule-based classification, classification quality, validation
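A minimal sketch of steps (i)-(ii) described above: association rules whose consequent is a class label are treated as classification rules, fired in descending confidence order, and their classification quality is reported back to the expert. The rules, records and default class are illustrative assumptions.
```python
# Rule-based classification from association rules (antecedent, class, confidence).
rules = [
    ({"age=young", "income=low"}, "no_loan", 0.91),
    ({"income=high"}, "loan", 0.88),
    ({"age=senior"}, "loan", 0.64),
]

def classify(record: set, rules, default="no_loan") -> str:
    # fire the highest-confidence rule whose antecedent the record satisfies
    for antecedent, label, _ in sorted(rules, key=lambda r: -r[2]):
        if antecedent <= record:          # antecedent is a subset of the record
            return label
    return default

data = [({"age=young", "income=low", "student=yes"}, "no_loan"),
        ({"age=senior", "income=high"}, "loan"),
        ({"age=senior", "income=low"}, "loan")]

hits = sum(classify(rec, rules) == truth for rec, truth in data)
print(f"classification quality: {hits}/{len(data)} correct")  # 3/3 here
```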
Procedia PDF Downloads 439
23196 Studying the Effectiveness of Using Narrative Animation on Students’ Understanding of Complex Scientific Concepts
Authors: Atoum Abdullah
Abstract:
The purpose of this research is to determine the extent to which computer animation and narration affect students' understanding of complex scientific concepts and improve their exam performance, compared to traditional lectures that include PowerPoint slides with text and static images. A mixed-method design was used for data collection, including quantitative and qualitative data. Quantitative data were collected using a pre- and post-test method and a close-ended questionnaire; qualitative data were collected through an open-ended questionnaire. The pre- and post-test strategy was used to measure the level of students' understanding with and without the use of animation. The test included multiple-choice questions to test factual knowledge, open-ended questions to test conceptual knowledge, and label-the-diagram questions to test application knowledge. The results showed that students, on average, performed significantly higher on the post-test than on the pre-test in all areas of acquired knowledge. However, the increase in post-test scores with respect to conceptual and application knowledge was higher than the increase with respect to factual knowledge. This result demonstrates that animation is more beneficial for acquiring deeper, conceptual, and cognitive knowledge than for acquiring factual knowledge alone.
Keywords: animation, narration, science, teaching
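A minimal sketch of the pre/post comparison described above: the same students are tested before and after the animated lesson, so a paired t-test is a natural choice. The scores are synthetic, and the paper does not state which significance test was used.
```python
# Paired t-test on pre/post scores for the same group of students.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
pre = rng.normal(55, 10, 40)            # pre-test scores (out of 100)
post = pre + rng.normal(12, 6, 40)      # post-test scores after the animated lesson

t, p = stats.ttest_rel(post, pre)
print(f"paired t-test: t={t:.2f}, p={p:.2e}, mean gain={np.mean(post - pre):.1f}")
```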
Procedia PDF Downloads 170
23195 Asymmetrical Informative Estimation for Macroeconomic Model: Special Case in the Tourism Sector of Thailand
Authors: Chukiat Chaiboonsri, Satawat Wannapan
Abstract:
This paper applied an asymmetric information concept to the estimation of a macroeconomic model of the tourism sector in Thailand. The variables statistically analyzed are Thailand's international and domestic tourism revenues, the expenditures of foreign and domestic tourists, service investments by private sectors, service investments by the government of Thailand, Thailand's service imports and exports, and net service income transfers. All of the data are time-series indices observed between 2002 and 2015. Empirically, the tourism multiplier and accelerator were estimated by two statistical approaches. The first was the Generalized Method of Moments (GMM) model, based on the assumption that the tourism market in Thailand has perfect information (symmetrical data). The second was the Maximum Entropy Bootstrapping (MEboot) approach, based on a process that attempts to deal with imperfect information and reduce uncertainty in the data observations (asymmetrical data). In addition, the tourism leakages were investigated with a simple model based on the injections-and-leakages concept. The empirical findings show that the parameters computed from the MEboot approach differ from those of the GMM method. However, both the MEboot estimation and the GMM model suggest that Thailand's tourism sector is in a period capable of stimulating the economy.
Keywords: Thailand tourism, Maximum Entropy Bootstrapping approach, macroeconomic model, asymmetric information
Procedia PDF Downloads 295
23194 Qualitative Study of Pre-Service Teachers' Imagined Professional World vs. Real Experiences of In-Service Teachers
Authors: Masood Monjezi
Abstract:
English teachers' pedagogical identity construction is the process by which teachers become teachers and maintain their teaching selves. The pedagogical identity of teachers is influenced by several factors within the individual and the society. The purpose of this study was to compare the imagined social world of pre-service teachers with the real experiences of in-service teachers in the context of Iran, to see how prepared the pre-service teachers are with a view to their identity formation. This study used a qualitative approach to data collection and analysis. Structured and semi-structured interviews, focus groups and process logs were used to collect the data, which were then analyzed using open coding. The findings showed that the imagined world of the pre-service teachers only partly corresponded with the real-world experiences of the in-service teachers, leaving the pre-service teachers unprepared for their real-world teaching profession. The findings suggest that current approaches to English teacher training need modification to better prepare pre-service teachers for the future that awaits them.
Keywords: imagined professional world, in-service teachers, pre-service teachers, real experiences, community of practice, identity
Procedia PDF Downloads 336
23193 Proposing an Optimal Pattern for Evaluating the Performance of the Staff Management of the Water and Sewage Organization in Western Azerbaijan Province, Iran
Authors: Tohid Eskandarzadeh, Nader Bahlouli, Turaj Behnam, Azra Jafarzadeh
Abstract:
The purpose of the study reported in this paper was to propose an optimal pattern for evaluating the staff management performance of the water and sewage organization. The performance prism model was used to evaluate the following significant dimensions of performance: organizational strategies, organizational processes, organizational capabilities, and stakeholders' partnership and satisfaction. In the present study, a standard, valid and reliable questionnaire was used to obtain data about the five dimensions of the performance prism model. 169 sample respondents, selected from the staff of the water and waste-water organization in western Azerbaijan, Iran, responded to the questionnaire. The alpha coefficient was used to check the reliability of the data collection instrument, which was measured to be beyond 0.7. The obtained data were statistically analyzed by means of SPSS version 18. The results of the data analysis indicated that the performance of the staff management of the water and waste-water organization in western Azerbaijan was acceptable in terms of organizational strategies, organizational processes, and stakeholders' partnership and satisfaction. Nevertheless, the performance of the staff management with respect to organizational capabilities was found to be average. Indeed, the researchers drew the conclusion that the current performance of the staff management in this organization was less than ideal.
Keywords: performance evaluation, performance prism model, water, waste-water organization
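A minimal sketch of the reliability check mentioned above: Cronbach's alpha for a block of Likert-scale items, computed from its standard formula alpha = k/(k-1) * (1 - sum of item variances / variance of the total score). The synthetic responses are illustrative, not the study's data.
```python
# Cronbach's alpha on a respondents x items matrix of Likert responses.
import numpy as np

rng = np.random.default_rng(9)
latent = rng.normal(size=(169, 1))                       # 169 respondents
# 10 items loading on the same latent trait, 1-5 Likert scale
items = np.clip(np.round(3 + latent + rng.normal(0, 0.8, (169, 10))), 1, 5)

k = items.shape[1]
alpha = k / (k - 1) * (1 - items.var(axis=0, ddof=1).sum()
                       / items.sum(axis=1).var(ddof=1))
print(f"Cronbach's alpha = {alpha:.2f}  (acceptable if > 0.7)")
```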
Procedia PDF Downloads 328
23192 The Nutrient Foramen of the Scaphoid Bone – A Morphological Study
Authors: B. V. Murlimanju, P. J. Jiji, Latha V. Prabhu, Mangala M. Pai
Abstract:
Background: The scaphoid is the most commonly fractured bone of the wrist. A fracture may disrupt the vessels and end up as avascular necrosis of the bone. The objective of the present study was to investigate the morphology and number of the nutrient foramina in dried cadaveric scaphoid bones of the Indian population. Methods: The present study included 46 scaphoid bones (26 right sided and 20 left sided), which were obtained from the gross anatomy laboratory of our institution. The bones were macroscopically observed for the nutrient foramina, and data on their number were collected, tabulated and analyzed. Results: All of our specimens (100%) exhibited nutrient foramina over the non-articular surfaces; the foramina were observed only over the palmar and dorsal surfaces of the scaphoid bones, both proximal and distal to the mid-waist of the bone. The foramina ranged between 9 and 54 in each scaphoid bone: over the palmar surface, between 2 and 24; over the dorsal surface, between 7 and 36; proximal to the waist, between 2 and 24; and distal to the waist, between 3 and 39. Conclusion: We believe that the present study has provided additional data about the nutrient foramina of the scaphoid bones. The data are enlightening to the orthopedic surgeon and would help in hand surgeries. Morphological knowledge of the vasculature, the foramina of entry and their number is required to understand the concepts involved in avascular necrosis of the proximal scaphoid and non-union of fractures at the waist of the scaphoid.
Keywords: avascular necrosis, nutrient, scaphoid, vascular
Procedia PDF Downloads 344
23191 Overview of Development of a Digital Platform for Building Critical Infrastructure Protection Systems in Smart Industries
Authors: Bruno Vilić Belina, Ivan Župan
Abstract:
Smart industry concepts and digital transformation are very popular in many industries, which develop their own digital platforms with an important role in innovation and transactions. The main idea of smart industry digital platforms is central data collection, industrial data integration, and data usage for smart applications and services. This paper presents the development of a digital platform for building critical infrastructure protection systems in smart industries. It researches different service contracting modalities in service level agreements (SLAs); customer relationship management (CRM) relations; and trends and changes in business architectures (especially process business architecture) for the purpose of developing infrastructural production and distribution networks, information infrastructure meta-models and the generic processes demanded of critical infrastructure owners by critical infrastructure law, satisfying cybersecurity requirements and taking hybrid threats into account.
Keywords: cybersecurity, critical infrastructure, smart industries, digital platform
Procedia PDF Downloads 106
23190 Innovation in Traditional Game: A Case Study of Trainee Teachers' Learning Experiences
Authors: Malathi Balakrishnan, Cheng Lee Ooi, Chander Vengadasalam
Abstract:
The purpose of this study is to explore trainee teachers' learning experiences in innovating traditional games during a traditional game carnival, examining issues arising from multiple case studies of those experiences. A qualitative methodology was adopted through observations, semi-structured interviews and content analysis of reflective journals on the trainee teachers' experiences of creating and implementing innovative traditional games. Twelve groups of 36 trainee teachers who had registered for the Sports and Physical Education Management course were the participants in this research during the traditional game carnival. Semi-structured interviews were administered after the trainee teachers' experiences of creating innovative traditional games, and reflective journals were collected after the carnival day and their content analyzed. Inductive data analysis was used to evaluate the various data sources; all collected data were evaluated through the NVivo data analysis process, and the inductive reasoning was interpreted based on Self-Determination Theory (SDT). The findings showed that the trainee teachers had positive game participation experiences, game knowledge about traditional games and positive motivation to innovate the games. The data also revealed the influence of themes such as cultural significance and creativity. It can be concluded from the findings that the organized game carnival, a coursework requirement of the Institute of Teacher Training Malaysia, was able to enhance the trainee teachers' innovative thinking skills. SDT, as a multidimensional approach to motivation, was utilized; therefore, teacher trainers may design further learning experiences using SDT.
Keywords: learning experiences, innovation, traditional games, trainee teachers
Procedia PDF Downloads 330
23189 Trend Analysis for Extreme Rainfall Events in New South Wales, Australia
Authors: Evan Hajani, Ataur Rahman, Khaled Haddad
Abstract:
Climate change will affect the hydrological cycle in many different ways, such as increases in evaporation and rainfall. There has been growing interest among researchers in identifying the nature of trends in historical rainfall data in many different parts of the world. This paper examines the trends in annual maximum rainfall data from 30 stations in New South Wales, Australia, using two non-parametric tests, Mann-Kendall (MK) and Spearman's Rho (SR). Rainfall data were analyzed for fifteen different durations ranging from 6 min to 3 days. It is found that the sub-hourly durations (6, 12, 18, 24, 30, and 48 minutes) show statistically significant positive (upward) trends, whereas longer-duration (sub-daily and daily) events generally show a statistically significant negative (downward) trend. It is also found that the MK and SR tests provide notably different results for some of the rainfall event durations considered in this study. Since shorter-duration sub-hourly rainfall events show positive trends at many stations, the design rainfall data based on stationary frequency analysis for these durations need to be adjusted to account for the impact of climate change. These shorter durations are more relevant to many urban development projects based on smaller catchments having a much shorter response time.
Keywords: climate change, Mann-Kendall test, Spearman's Rho test, trends, design rainfall
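A minimal sketch of the two non-parametric trend tests used above, applied to a synthetic annual-maximum rainfall series; the tie correction in the MK variance is omitted for brevity, and the series is illustrative, not station data.
```python
# Mann-Kendall and Spearman's Rho trend tests on an annual-maximum series.
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
years = np.arange(1970, 2014)
rain = rng.gamma(4, 10, len(years)) + 0.3 * (years - years[0])  # built-in upward trend

def mann_kendall(x):
    n = len(x)
    s = np.sum([np.sign(x[j] - x[i]) for i in range(n) for j in range(i + 1, n)])
    var_s = n * (n - 1) * (2 * n + 5) / 18      # no-ties variance
    z = (s - np.sign(s)) / np.sqrt(var_s)       # continuity-corrected Z statistic
    return z, 2 * (1 - stats.norm.cdf(abs(z)))  # two-sided p-value

z, p_mk = mann_kendall(rain)
rho, p_sr = stats.spearmanr(years, rain)        # Spearman's Rho against time
print(f"MK: Z={z:.2f}, p={p_mk:.4f};  SR: rho={rho:.2f}, p={p_sr:.4f}")
```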
Procedia PDF Downloads 271