Search results for: export trade data
19171 A Bayesian Multivariate Microeconometric Model for Estimation of Price Elasticity of Demand
Authors: Jefferson Hernandez, Juan Padilla
Abstract:
Estimation of price elasticity of demand is a valuable tool for the task of price setting. Given its relevance, it is an active field of microeconomic and statistical research. Price elasticity in the oil and gas industry, in particular for fuels sold at gas stations, has proven a challenging topic given market and state restrictions and the underlying correlation structures between the types of fuels sold by the same gas station. This paper explores the Lotka-Volterra model for the problem of price elasticity estimation in the context of fuels; in addition, multivariate random effects are introduced to deal with errors, e.g., measurement or missing-data errors. To model the underlying correlation structures, the Inverse-Wishart, Hierarchical Half-t, and LKJ distributions are studied. The Bayesian paradigm, through Markov Chain Monte Carlo (MCMC) algorithms, is adopted for model estimation. Simulation studies covering a wide range of situations were performed to evaluate parameter recovery for the proposed models and algorithms. Results revealed that the proposed algorithms recovered all model parameters quite well. A real data set analysis was also performed to illustrate the proposed approach.
Keywords: price elasticity, volume, correlation structures, Bayesian models
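As a minimal illustration of MCMC-based elasticity estimation (not the authors' multivariate model — a univariate log-log demand model with invented data and a random-walk Metropolis sampler):

```python
import math
import random

random.seed(42)

# Simulated log-log demand data: ln(Q) = a + b*ln(P) + noise, true elasticity b = -1.5.
a_true, b_true, sigma = 2.0, -1.5, 0.2
prices = [1.0 + 0.1 * i for i in range(50)]
log_p = [math.log(p) for p in prices]
log_q = [a_true + b_true * lp + random.gauss(0, sigma) for lp in log_p]

def log_likelihood(a, b):
    """Gaussian log-likelihood of the log-log demand model (sigma assumed known)."""
    return sum(-0.5 * ((lq - a - b * lp) / sigma) ** 2
               for lp, lq in zip(log_p, log_q))

# Random-walk Metropolis over (a, b) with flat priors.
a, b = 0.0, 0.0
ll = log_likelihood(a, b)
samples = []
for step in range(20000):
    a_new = a + random.gauss(0, 0.05)
    b_new = b + random.gauss(0, 0.05)
    ll_new = log_likelihood(a_new, b_new)
    delta = ll_new - ll
    if delta >= 0 or random.random() < math.exp(delta):
        a, b, ll = a_new, b_new, ll_new
    if step >= 5000:              # discard burn-in
        samples.append(b)

b_hat = sum(samples) / len(samples)   # posterior mean elasticity, close to -1.5
```

The posterior mean of `b` recovers the simulated elasticity, mirroring the parameter-recovery checks described in the abstract on a much smaller scale.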
Procedia PDF Downloads 163
19170 Applications Using Geographic Information System for Planning and Development of Energy Efficient and Sustainable Living for Smart-Cities
Authors: Javed Mohammed
Abstract:
As urbanization proceeds on an unprecedented scale worldwide, there are pressing demands from both academic research and practice for smart management and intelligent planning of cities, to handle the increasing demands on infrastructure and the potential risks of population agglomeration in disaster management. Geo-spatial data and Geographic Information Systems (GIS) are essential components for building smart cities: at a basic level, they map the physical world into a virtual environment as a referencing framework. At a higher level, GIS has become very important to smart cities across different sectors. In the digital city era, digital maps and geospatial databases have long been integrated into government workflows in land management, urban planning, and transportation. GIS is expected to be powerful not only as an archival and data-management tool but also as a source of spatial models for supporting decision-making in intelligent cities. The purpose of this project is to offer observations and analysis, based on a detailed discussion of a GIS-driven framework, towards the development of smart and sustainable cities through high penetration of renewable energy technologies.
Keywords: digital maps, geo-spatial, geographic information system, smart cities, renewable energy, urban planning
Procedia PDF Downloads 525
19169 Evaluation of the Self-Efficacy and Learning Experiences of Final year Students of Computer Science of Southwest Nigerian Universities
Authors: Olabamiji J. Onifade, Peter O. Ajayi, Paul O. Jegede
Abstract:
This study investigated the preparedness of undergraduate final-year students of Computer Science as the next entrants into the workplace. It assessed their self-efficacy in computational tasks and examined the relationship between their self-efficacy and their learning experiences in Southwest Nigerian universities. The study employed a descriptive survey research design. The population of the study comprised all final-year students of Computer Science. A purposive sampling technique was adopted to select a representative sample of interest from the final-year students. The Students' Computational Task Self-Efficacy Questionnaire (SCTSEQ) was used to collect data. Mean, standard deviation, frequency, percentages, and linear regression were used for data analysis. The results revealed that the final-year students of Computer Science were moderately confident in performing computational tasks, and that there is a significant relationship between the learning experiences of the students and their self-efficacy. The study recommends that the curriculum be improved to accommodate industry experts as lecturers in some of the courses, that provision be made for more practical sessions, and that the learning experiences of students be considered an important component of the undergraduate Computer Science curriculum development process.
Keywords: computer science, learning experiences, self-efficacy, students
Procedia PDF Downloads 143
19168 A Comparative Study on Software Patent: The Meaning of 'Use' in Direct Infringement
Authors: Tien Wei Daniel Hwang
Abstract:
Computer program inventors, particularly in Fintech, have been unwilling to apply for patents in Taiwan since 2014. Passing the 'statutory subject matter eligibility' test and qualifying as a system patent are not the only causes of the reduction in the number of applications. Taiwanese courts must also resolve whether a defendant has 'used' the software patent in a direct infringement suit. Neither 35 U.S.C. § 271(a) nor Article 58, Paragraph 2 of the Taiwan Patent Law defines the meaning of 'use'. Centillion Data Sys., LLC v. Qwest Commc'ns Int'l, Inc. reconsidered the meaning of 'use' in system patent infringement and held that 'a party must put the invention into service, i.e., control the system as a whole and obtain benefit from it.' In Taiwan, the Intellectual Property Office, Ministry of Economic Affairs, has explained that 'using' a patent means 'achieving the technical effect of the patent.' Nonetheless, this definition is too broad to apply, not only to software patents but also to traditional patents. To provide a friendly environment for Fintech corporations, this article aims to show Taiwanese courts why and how the United States District Court for the Southern District of Indiana, Indianapolis Division, and the United States Court of Appeals for the Federal Circuit defined the meaning of 'use' in 35 U.S.C. § 271(a). This definition is lax, however, and confuses many defendants in the United States. Accordingly, this article shows that the elements in the Taiwan Patent Law differ from those in 35 U.S.C. § 271(a), so Taiwanese courts can follow the interpretation of 'use' in Centillion Data without encountering the same obstacle.
Keywords: direct infringement, FinTech, software patent, use
Procedia PDF Downloads 299
19167 Career Guidance System Using Machine Learning
Authors: Mane Darbinyan, Lusine Hayrapetyan, Elen Matevosyan
Abstract:
Artificial Intelligence in Education (AIED) was created to help students get ready for the workforce, and over the past 25 years it has grown significantly, offering a variety of technologies to support academic, institutional, and administrative services. The task remains challenging, however, especially considering the labor market's rapid change. While choosing a career, people face various obstacles because they do not take their own preferences into consideration, which can lead to many other problems, such as shifting jobs, work stress, occupational infirmity, reduced productivity, and manual error. Besides preferences, people should properly evaluate their technical and non-technical skills, as well as their personalities. Professional counseling has become a difficult undertaking for counselors due to the wide range of career choices brought on by changing technological trends. It is necessary to close this gap by utilizing technology that makes sophisticated predictions about a person's career goals based on their personality. Hence, there is a need to create an automated model that supports decision-making based on user inputs. Improving career guidance can be achieved by embedding machine learning into the career consulting ecosystem. Various career guidance systems work on the same logic: classifying applicants, matching applications with appropriate departments or jobs, making predictions, and providing suitable recommendations. Methodologies like KNN, neural networks, K-means clustering, decision trees, and many other advanced algorithms are applied to the collected data to help predict the right careers. Besides helping users with their career choice, these systems provide numerous opportunities that are very useful when making this hard decision.
They help candidates recognize where they specifically lack sufficient skills so that they can improve them. They can also offer an e-learning platform that takes the user's knowledge gaps into account. Furthermore, users can be provided with details on a particular job, such as the abilities required to excel in that industry.
Keywords: career guidance system, machine learning, career prediction, predictive decision, data mining, technical and non-technical skills
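As a toy sketch of the KNN-style matching the abstract mentions (the profiles, feature scales, and career labels below are invented for illustration, not the authors' dataset):

```python
import math
from collections import Counter

# Toy dataset: each profile is (technical_skill, communication, creativity) on a
# 0-10 scale, labelled with an illustrative career field.
profiles = [
    ((9, 4, 3), "software engineering"),
    ((8, 5, 2), "software engineering"),
    ((3, 9, 5), "counseling"),
    ((2, 8, 6), "counseling"),
    ((5, 4, 9), "design"),
    ((4, 5, 8), "design"),
]

def knn_recommend(candidate, k=3):
    """Recommend the majority career label among the k nearest profiles."""
    nearest = sorted(profiles, key=lambda item: math.dist(candidate, item[0]))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

recommendation = knn_recommend((8, 4, 4))   # a technically strong candidate
```

A production system would use many more features (personality traits, preferences) and validated labels, but the classify-then-recommend logic is the same.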
Procedia PDF Downloads 80
19166 Improvement of Parallel Compressor Model in Dealing Outlet Unequal Pressure Distribution
Authors: Kewei Xu, Jens Friedrich, Kevin Dwinger, Wei Fan, Xijin Zhang
Abstract:
The Parallel Compressor Model (PCM) is a simplified approach to predicting compressor performance under inlet distortion. In the PCM calculation, the sub-compressors' outlet static pressure is assumed to be uniform, which simplifies the calculation procedure. However, if the compressor's outlet duct is not long and straight, this assumption frequently induces errors ranging from 10% to 15%. This paper provides a revised PCM calculation method that can correct the error. The revised method employs the energy equation, momentum equation, and continuity equation to acquire the needed parameters and replace the equal-static-pressure assumption. Based on the revised method, the PCM is applied to two compression systems with different blade types. Their performance under non-uniform inlet conditions is predicted with the revised calculation method, and the predictions are used to evaluate the method's efficiency. Validated against experimental data, the calculated results agree well with experiment, with errors ranging from 0.1% to 3%. This shows that the revised PCM calculation method possesses great advantages in predicting the performance of a distorted compressor with a limited exhaust duct.
Keywords: parallel compressor model (PCM), revised calculation method, inlet distortion, outlet unequal pressure distribution
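For intuition about the baseline coupling the paper revises, here is a toy sketch of the classical equal-outlet-static-pressure closure: two invented linear sub-compressor pressure characteristics, with the flow split found by bisection so both outlet pressures match. The maps and numbers are illustrative only, not the paper's revised energy/momentum/continuity formulation.

```python
# Toy sub-compressor maps: outlet static pressure (arbitrary units) falls as
# mass flow rises; the distorted sector sees a weaker characteristic.
def sub_pressure_clean(m):
    return 3.0 - 0.8 * m

def sub_pressure_distorted(m):
    return 2.6 - 0.8 * m

def solve_split(total_flow, tol=1e-8):
    """Bisect the flow split so both sub-compressor outlet pressures match."""
    lo, hi = 0.0, total_flow
    while hi - lo > tol:
        m1 = 0.5 * (lo + hi)
        diff = sub_pressure_clean(m1) - sub_pressure_distorted(total_flow - m1)
        if diff > 0:
            lo = m1   # clean side still at higher pressure: give it more flow
        else:
            hi = m1
    return 0.5 * (lo + hi)

m1 = solve_split(1.0)   # flow through the clean-sector sub-compressor
```

The clean sector absorbs more of the flow (m1 = 0.75 here); the revised method replaces the pressure-matching condition with conservation equations over the exhaust duct.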
Procedia PDF Downloads 329
19165 Complementing Assessment Processes with Standardized Tests: A Work in Progress
Authors: Amparo Camacho
Abstract:
ABET-accredited programs must assess the development of student learning outcomes (SOs) in engineering programs. Different institutions implement different strategies for this assessment, usually designed "in house." This paper presents a proposal for including standardized tests to complement the ABET assessment model in an engineering college made up of six distinct engineering programs. The engineering college formulated a model of quality assurance in education, to be implemented throughout the six engineering programs, to regularly assess and evaluate the achievement of SOs in each program offered. The model uses diverse techniques and sources of data to assess student performance and to implement improvement actions based on the results of this assessment. The model is called the "Assessment Process Model," and it includes SOs A through K, as defined by ABET. SOs can be divided into two categories: "hard skills" and "professional skills" (soft skills). The first includes abilities such as applying knowledge of mathematics, science, and engineering, and designing and conducting experiments, as well as analyzing and interpreting data. The second category, "professional skills," includes communicating effectively and understanding professional and ethical responsibility. Within the Assessment Process Model, various tools were used to assess SOs related to both "hard" and "soft" skills. The assessment tools designed included rubrics, surveys, questionnaires, and portfolios. In addition to these instruments, the engineering college decided to use tools that systematically gather consistent quantitative data. For this reason, an in-house exam was designed and implemented, based on the curriculum of each program. Even though this exam was administered during various academic periods, it is not currently considered standardized.
In 2017, the engineering college included three standardized tests: one to assess mathematical and scientific reasoning and two more to assess reading and writing abilities. With these exams, the college hopes to obtain complementary information that can help better measure the development of both hard and soft skills of students in the different engineering programs. In the first semester of 2017, the three exams were given to three sample groups of students from the six engineering programs, drawn from the first-, fifth-, and tenth-semester cohorts. At the time of submission of this paper, the engineering college has descriptive statistical data and is working with statisticians on a more in-depth and detailed analysis of the sample students' achievement on the three exams. The overall objective of including standardized exams in the assessment model is to identify the least-developed SOs more precisely, in order to define and implement the educational strategies necessary for students to achieve them in each engineering program.
Keywords: assessment, hard skills, soft skills, standardized tests
Procedia PDF Downloads 283
19164 Decode and Forward Cooperative Protocol Enhancement Using Interference Cancellation
Authors: Siddeeq Y. Ameen, Mohammed K. Yousif
Abstract:
Cooperative communication systems are considered a promising technology to improve system capacity, reliability, and performance over fading wireless channels. A cooperative relaying system with a single antenna can achieve the advantages of multiple-antenna communication systems, and it is ideally suited to distributed communication systems: the relays can cooperate and form virtual MIMO systems. This paper therefore investigates a possible enhancement of a cooperative system using the decode-and-forward protocol. With decode and forward, an attempt is made to cancel, or at least reduce, the interference instead of increasing the SNR values. The latter is achieved via the use of groups of relays, depending on the channel status from source to relay and from relay to destination, respectively. In the proposed system, the transmission time is divided into two phases for the decode-and-forward protocol. The first phase is allocated to the source to transmit its data, while the relay and destination nodes are in receiving mode. The second phase is allocated to the first and second groups of relay nodes to relay the data to the destination node. Simulation results have shown that an improvement in performance is achieved compared to conventional decode and forward in terms of BER and transmission rate.
Keywords: cooperative systems, decode and forward, interference cancellation, virtual MIMO
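To see why relaying helps in BER terms, here is a minimal, closed-form sketch (not the paper's simulation): BPSK over AWGN, a single two-hop decode-and-forward chain, and an invented assumption that the direct link is 6 dB weaker than each relay hop.

```python
import math

def bpsk_ber(snr_linear):
    """BPSK bit-error rate over AWGN: Q(sqrt(2*SNR)) = 0.5*erfc(sqrt(SNR))."""
    return 0.5 * math.erfc(math.sqrt(snr_linear))

def df_two_hop_ber(snr_sr, snr_rd):
    """End-to-end BER of a two-hop decode-and-forward chain: the bit arrives
    wrong when exactly one of the two hops flips it."""
    p1, p2 = bpsk_ber(snr_sr), bpsk_ber(snr_rd)
    return p1 * (1 - p2) + p2 * (1 - p1)

snr = 10 ** (8 / 10)                 # 8 dB per hop, illustrative
direct = bpsk_ber(snr / 4)           # direct link assumed 6 dB weaker
relayed = df_two_hop_ber(snr, snr)   # each shorter hop enjoys the full SNR
```

Under these assumptions the relayed BER is orders of magnitude below the direct-link BER; the paper's relay-group selection builds on the same per-hop error trade-off.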
Procedia PDF Downloads 321
19163 Sensor Monitoring of the Concentrations of Different Gases Present in Synthesis of Ammonia Based on Multi-Scale Entropy and Multivariate Statistics
Authors: S. Aouabdi, M. Taibi
Abstract:
The supervision of chemical processes is the subject of increased development because of increasing demands on reliability and safety. An important aspect of the safe operation of a chemical process is the earlier detection of process faults or other special events, and the location and removal of the factors causing such events, than is possible with conventional limit and trend checks. With the aid of process models and estimation and decision methods, it is possible to monitor hundreds of variables in a single operating unit, and these variables may be recorded hundreds or thousands of times per day. In the absence of an appropriate processing method, only limited information can be extracted from these data. Hence, a tool is required that can project the high-dimensional process space into a low-dimensional space amenable to direct visualization, and that can also identify key variables and important features of the data. Our contribution is the development of a new monitoring method based on multi-scale entropy (MSE), used to characterize the behaviour of the concentrations of the different gases present in ammonia synthesis, together with a soft sensor based on PCA applied to estimate these variables.
Keywords: ammonia synthesis, concentrations of different gases, soft sensor, multi-scale entropy, multivariate statistics
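A simplified sketch of the multi-scale entropy computation (coarse-graining followed by sample entropy) on invented white-noise data — a pedagogical reduction, not the authors' implementation; in particular, the tolerance r is recomputed per scale here, whereas some MSE definitions fix it at scale 1:

```python
import math
import random

def coarse_grain(signal, scale):
    """Average consecutive, non-overlapping windows of length `scale`."""
    n = len(signal) // scale
    return [sum(signal[i * scale:(i + 1) * scale]) / scale for i in range(n)]

def sample_entropy(signal, m=2, r=None):
    """SampEn(m, r): -ln of the conditional probability that sequences
    matching for m points (within tolerance r) also match for m+1 points."""
    if r is None:
        mean = sum(signal) / len(signal)
        sd = math.sqrt(sum((x - mean) ** 2 for x in signal) / len(signal))
        r = 0.2 * sd
    def count_matches(length):
        templates = [signal[i:i + length] for i in range(len(signal) - length)]
        hits = 0
        for i in range(len(templates)):
            for j in range(i + 1, len(templates)):
                if max(abs(a - b) for a, b in zip(templates[i], templates[j])) <= r:
                    hits += 1
        return hits
    b, a = count_matches(m), count_matches(m + 1)
    return -math.log(a / b) if a > 0 and b > 0 else float("inf")

random.seed(1)
noisy = [random.gauss(0, 1) for _ in range(300)]
mse_curve = [sample_entropy(coarse_grain(noisy, s)) for s in (1, 2, 3)]
```

Plotting `mse_curve` against scale is what distinguishes genuinely complex process signals from uncorrelated noise, whose entropy drops under coarse-graining.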
Procedia PDF Downloads 334
19162 Classification of Myoelectric Signals Using Multilayer Perceptron Neural Network with Back-Propagation Algorithm in a Wireless Surface Myoelectric Prosthesis of the Upper-Limb
Authors: Kevin D. Manalo, Jumelyn L. Torres, Noel B. Linsangan
Abstract:
This paper focuses on a wireless myoelectric prosthesis of the upper limb that uses a multilayer perceptron neural network with back-propagation, an algorithm widely used in pattern recognition. The network can be trained on signals and then perform a function on its own based on sample inputs. The paper makes use of the neural network to classify the electromyography (EMG) signal produced by the muscle at the amputee's skin surface. The gathered data is passed on to the classification stage wirelessly through Zigbee technology. The signal is classified and trained to produce the arm positions in the prosthesis. Through programming in Verilog on a Field Programmable Gate Array (FPGA) with Zigbee, the EMG signals are acquired and used for classification. The classified signal is used to produce the corresponding hand movements (open, pick, hold, and grip) through the Zigbee controller. The data is then processed through the MLP neural network using MATLAB, which is then used for the surface myoelectric prosthesis. A Z-test is used to evaluate the output acquired from the neural network.
Keywords: field programmable gate array, multilayer perceptron neural network, Verilog, Zigbee
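A minimal sketch of the MLP-with-back-propagation training loop the abstract relies on. XOR stands in for the EMG-feature-to-hand-movement mapping (the real system uses MATLAB and multi-channel EMG features; everything below is an invented toy):

```python
import math
import random

random.seed(0)

# XOR as a stand-in classification task: inputs -> target class (0 or 1).
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]
H = 4                                               # hidden units
w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0
sig = lambda z: 1 / (1 + math.exp(-z))

def forward(x):
    h = [sig(sum(w * xi for w, xi in zip(w1[j], x)) + b1[j]) for j in range(H)]
    return h, sig(sum(w2[j] * h[j] for j in range(H)) + b2)

def loss():
    return sum((forward(x)[1] - y) ** 2 for x, y in data)

initial = loss()
lr = 0.5
for _ in range(5000):
    for x, y in data:
        h, out = forward(x)
        d_out = (out - y) * out * (1 - out)          # gradient at the output sigmoid
        for j in range(H):
            d_h = d_out * w2[j] * h[j] * (1 - h[j])  # back-propagated hidden gradient
            w2[j] -= lr * d_out * h[j]
            for i in range(2):
                w1[j][i] -= lr * d_h * x[i]
            b1[j] -= lr * d_h
        b2 -= lr * d_out

final = loss()   # squared error after training, far below the initial value
```

The same forward/backward structure scales to more inputs and output classes for the four hand-movement labels.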
Procedia PDF Downloads 387
19161 Mitigating Self-Regulation Issues in the Online Instruction of Math
Authors: Robert Vanderburg, Michael Cowling, Nicholas Gibson
Abstract:
Mathematics is one of the core subjects taught in the Australian K-12 education system and is considered an important foundation for future studies in areas such as engineering and technology. In addition, Australia has been a world leader in distance education due to the vastness of its geographic landscape. Despite this, research is still needed on distance mathematics instruction. Even though curriculum delivery has given way to online study, with a resultant push for computer-based (PC, tablet, smartphone) mathematics instruction, much instruction still involves practice problems similar to those in the original curriculum packs, without the ability for students to self-regulate their learning using the full interactive capabilities of these devices. Given this need, this paper addresses issues students have during online instruction. The study involved 32 students struggling with mathematics who were enrolled in a math tutorial conducted in an online setting, and used a case study design to understand some of the blockades hindering the students' success. Data were collected by tracking students' practice and quizzes, tracking engagement with the site, recording one-on-one tutorials, and conducting interviews with the students. Results revealed that when students face cognitively straining tasks in an online instructional setting, the first thing to dissipate is their ability to self-regulate. The results also revealed that instructors can ameliorate the situation, and they provided useful data on strategies for designing future online tasks. Specifically, instructors could utilize cognitive dissonance strategies to reduce the cognitive drain of online tasks. They could segment the instruction process to reduce the cognitive demands of the tasks and provide in-depth self-regulatory training, freeing mental capacity for the mathematics content.
Finally, instructors could make specific scheduling and assignment-structure changes to reduce the number of student-centered self-regulatory tasks in the class. These findings are discussed in more detail and summarized in a framework that can be used for future work.
Keywords: digital education, distance education, mathematics education, self-regulation
Procedia PDF Downloads 134
19160 Correlation Analysis to Quantify Learning Outcomes for Different Teaching Pedagogies
Authors: Kanika Sood, Sijie Shang
Abstract:
A fundamental goal of education is to prepare students to become part of the global workforce and make beneficial contributions to society. In this paper, we analyze student performance for multiple courses that involve different teaching pedagogies: a cooperative learning technique and an inquiry-based learning strategy. Student performance includes student engagement, grades, and attendance records. We perform this study in the Computer Science department, covering online and in-person courses for 450 students. We perform correlation analysis to study the relationship between student scores and other parameters, such as gender and mode of learning. We use natural language processing and machine learning to analyze student feedback data and performance data. We assess the learning outcomes of the two teaching pedagogies for undergraduate and graduate courses to showcase pedagogical adoption and learning outcome as determinants of academic achievement. Early findings suggest that under the specified pedagogies, students become experts on their topics and show enhanced engagement with peers.
Keywords: bag-of-words, cooperative learning, education, inquiry-based learning, in-person learning, natural language processing, online learning, sentiment analysis, teaching pedagogy
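The core statistic in such a study is the Pearson correlation between a performance measure and another variable. A self-contained sketch on invented engagement-versus-grade data (not the study's dataset):

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented example: engagement scores vs. final grades for five students.
engagement = [2, 4, 5, 7, 9]
grades = [55, 62, 70, 80, 91]
r = pearson(engagement, grades)   # strongly positive on this toy data
```

For categorical predictors such as gender or mode of learning, one would instead encode the category numerically (point-biserial correlation) or compare group means, but the covariance-over-spread structure is the same.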
Procedia PDF Downloads 75
19159 A Comparative Study of Wellness Among Sportsmen and Non Sportsmen
Authors: Jaskaran Singh Sidhu
Abstract:
Aim: The purpose of this study is to compare wellness between sportsmen and non-sportsmen. Methodology: The present study is an experimental study of 80 senior secondary volleyball players, 16-19 years of age, from Ludhiana district of Punjab (India), and 80 non-sportspersons taken from senior secondary schools of Ludhiana district. The sample for this study was taken through a random sampling technique. Tools: A five-point scale with 50 items was used to assess wellness. Statistical Analysis: To determine whether a relationship exists among the variables, a t-test was used to test the significance of the difference between the means. Statistics for each characteristic were calculated: mean, standard deviation, and standard error of the mean. Data were analyzed using SPSS (Statistical Package for the Social Sciences). Statistical significance was set at p < 0.05. Results: Substantial differences were noted at p < 0.05 in overall wellness. Sportsmen showed significant differences at p < 0.05 in three parameters of wellness: physical wellness, mental wellness, and social wellness. In the spiritual and emotional wellness attributes, non-sportsmen showed significant differences at p < 0.05. Conclusion: The data indicate that overall wellness can be improved by participation in sports. The study further noted that participation in sports promotes the attributes of wellness, i.e., physical, mental, emotional, and social wellness.
Keywords: physical, mental, social, emotional, wellness, spiritual
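The group comparison above rests on an independent-samples t-test. A minimal sketch using Welch's variant (robust to unequal variances) on invented wellness scores; the study itself used SPSS with n = 80 per group, so the numbers below are illustrative only:

```python
import math
import statistics

def welch_t(sample_a, sample_b):
    """Welch's t statistic and degrees of freedom for two independent samples."""
    na, nb = len(sample_a), len(sample_b)
    va, vb = statistics.variance(sample_a), statistics.variance(sample_b)
    se2_a, se2_b = va / na, vb / nb
    t = (statistics.mean(sample_a) - statistics.mean(sample_b)) / math.sqrt(se2_a + se2_b)
    df = (se2_a + se2_b) ** 2 / (se2_a ** 2 / (na - 1) + se2_b ** 2 / (nb - 1))
    return t, df

# Invented wellness totals for two small groups.
sport = [182, 190, 175, 188, 194, 181]
non_sport = [170, 176, 168, 180, 172, 174]
t, df = welch_t(sport, non_sport)
```

The resulting t is then compared against the t distribution with `df` degrees of freedom at the chosen alpha (here p < 0.05) to decide significance.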
Procedia PDF Downloads 88
19158 Analysis of Generation Z and Perceptions of Conscious Consumption in the Light of Primary Data
Authors: Mónika Garai-Fodor, Nikoett Huszak
Abstract:
In the present study, we investigate the cognitive aspects of conscious consumer behaviour among members of Generation Z. In our view, conscious consumption can greatly help to foster social responsibility, environmentally and health-conscious behaviour, and ethical consumerism. We believe it is an important educational task to promote and reinforce consumer behaviour among young people that creates and increases community value. In this study, we analysed the dimensions of young people's conscious consumer behaviour and its manifestation in concrete forms of behaviour and in purchasing and consumption decisions. In a survey conducted through a snowball sampling procedure, the responses of 200 Generation Z respondents were analysed. The research examined young people's perceptions and opinions of conscious living and their perceptions of their own conscious consumer behaviour. The primary research used a pre-tested, standardised online questionnaire. Data were evaluated using bivariate and multivariate analyses in addition to descriptive statistics. The research presents results that are valid for the sample; we plan to continue with a larger sample survey and extend it to other generations. Our main objective is to analyse what conscious living means to young people, what behavioural elements they associate with it, and what activities they themselves undertake in this context.
Keywords: generation Z, conscious consumption, primary research, sustainability
Procedia PDF Downloads 37
19157 Effect of Collection Technique of Blood on Clinical Pathology
Authors: Marwa Elkalla, E. Ali Abdelfadil, Ali. Mohamed. M. Sami, Ali M. Abdel-Monem
Abstract:
To assess the impact of the blood collection technique on clinical pathology markers and to establish reference intervals, a study was performed using normal, healthy C57BL/6 mice. Both sexes were employed, and the mice were randomly assigned to groups depending on the phlebotomy technique used. Blood was drawn in one of four ways: intracardiac (IC), caudal vena cava (VC), caudal vena cava plus a peritoneal collection of any extravasated blood (MC), or retroorbital phlebotomy (RO). Several serum biochemistries, including liver function tests, as well as a complete blood count with differentials and a platelet count, were analysed from the blood and serum samples. Red blood cell count, haemoglobin (p > 0.002), hematocrit, alkaline phosphatase, albumin, total protein, and creatinine were all significantly greater in female mice. Platelet counts, certain white blood cell numbers (total, neutrophil, lymphocyte, and eosinophil counts), globulin, amylase, and the BUN/creatinine ratio were all greater in males. The VC approach appeared marginally superior to the IC approach for the characteristics under consideration and was linked to the least variation in both sexes. Transaminase levels showed the greatest variation between study groups. The aspartate aminotransferase (AST) values were associated with less fluctuation for the VC approach, while the alanine aminotransferase (ALT) values were similar between the IC and VC groups. Transaminase levels varied widely in the MC and RO groups. We found that the RO approach, the only one tested that allows repeated sample collection, yielded acceptable ALT readings. The findings show that test results are significantly affected by the phlebotomy technique and that the VC and IC techniques provide the most reliable data.
When planning a study and comparing data to reference ranges, the ranges supplied here by collection method and sex can be used to determine the best approach to data collection. The authors suggest establishing norms based on the procedures used by each individual researcher in his or her own lab.
Keywords: clinical, pathology, blood, effect
Procedia PDF Downloads 95
19156 Effects of a Nursing Intervention Program Using a Rehabilitation Self-Management Workbook on Depression, Motivation and Self-Efficacy of Rehabilitation Inpatients
Authors: Young Ae Song, So Yun Kim, Nan Ji Kim, So Young Jang, Yun Mee Park, Mi Jin Lee, Ji Yeon Lee
Abstract:
Background & Purpose: Many patients have psychological problems, such as depression and anxiety, during the rehabilitation period. Such psychological instability affects the patient's long-term prognosis. We developed a nursing intervention program for rehabilitation inpatients using a rehabilitation self-management workbook and evaluated the effects of the program on depression, motivation, and self-efficacy. Methods: The study was conducted using a nonequivalent control group non-synchronized design. Participants were rehabilitation inpatients: 27 patients in the control group and 20 in the experimental group. Questionnaires were completed three times (pretest, 5 days, 10 days). Final data for 40 patients were analyzed: 23 patients in the control group and 17 in the experimental group. Data were analyzed using the chi-square test, t-test, and repeated-measures ANOVA. Results: Depression in the experimental group decreased compared to the control group, but not significantly. Motivation in the experimental group changed significantly (F = 3.90, p = .029), and self-efficacy increased, but not significantly (F = 0.59, p = .559). Conclusion: The results of this study indicate that nursing intervention programs for rehabilitation inpatients could be useful for decreasing depression and improving motivation and self-efficacy.
Keywords: depression, motivation, self-efficacy, rehabilitation inpatient, self-management workbook
Procedia PDF Downloads 145
19155 Data Envelopment Analysis of Allocative Efficiency among Small-Scale Tuber Crop Farmers in North-Central, Nigeria
Authors: Akindele Ojo, Olanike Ojo, Agatha Oseghale
Abstract:
The empirical study examined the allocative efficiency of small holder tuber crop farmers in North central, Nigeria. Data used for the study were obtained from primary source using a multi-stage sampling technique with structured questionnaires administered to 300 randomly selected tuber crop farmers from the study area. Descriptive statistics, data envelopment analysis and Tobit regression model were used to analyze the data. The DEA result on the classification of the farmers into efficient and inefficient farmers showed that 17.67% of the sampled tuber crop farmers in the study area were operating at frontier and optimum level of production with mean allocative efficiency of 1.00. This shows that 82.33% of the farmers in the study area can still improve on their level of efficiency through better utilization of available resources, given the current state of technology. The results of the Tobit model for factors influencing allocative inefficiency in the study area showed that as the year of farming experience, level of education, cooperative society membership, extension contacts, credit access and farm size increased in the study area, the allocative inefficiency of the farmers decreased. The results on effects of the significant determinants of allocative inefficiency at various distribution levels revealed that allocative efficiency increased from 22% to 34% as the farmer acquired more farming experience. The allocative efficiency index of farmers that belonged to cooperative society was 0.23 while their counterparts without cooperative society had index value of 0.21. The result also showed that allocative efficiency increased from 0.43 as farmer acquired high formal education and decreased to 0.16 with farmers with non-formal education. 
The efficiency level in the allocation of resources increased with more contact with extension services: the allocative efficiency index increased from 0.16 to 0.31 as the frequency of extension contact rose from zero to a maximum of twenty contacts per annum. These results confirm that increases in years of farming experience, level of education, cooperative society membership, extension contacts, credit access and farm size lead to increased efficiency. The results further show that the age of the farmers contributed 32% to efficiency, but this reduced to an average of 15% as the farmers grew older. It is therefore recommended that enhanced research, extension delivery and farm advisory services be put in place so that farmers who did not attain the optimum frontier level can learn how to attain the remaining 74.39% level of allocative efficiency through better production practices from the robustly efficient farms. This will go a long way toward increasing the efficiency level of the farmers in the study area.
Keywords: allocative efficiency, DEA, Tobit regression, tuber crop
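DEA efficiency scores of the kind reported here are obtained by solving a linear programme per farm; in the degenerate single-input, single-output case, the input-oriented CCR score reduces to each farm's productivity divided by the best observed productivity. A minimal sketch of that special case, with purely illustrative farm figures (not data from the study):

```python
def ccr_efficiency(inputs, outputs):
    """Input-oriented CCR efficiency for the single-input, single-output case.

    With one input and one output, the DEA linear programme collapses to a
    simple ratio: each unit's productivity (output/input) divided by the
    best productivity observed in the sample.
    """
    productivities = [y / x for x, y in zip(inputs, outputs)]
    best = max(productivities)
    return [p / best for p in productivities]

# Hypothetical farm data: labour-days used and tonnes of tubers produced.
labour = [10.0, 20.0, 30.0, 40.0]
yield_t = [5.0, 12.0, 12.0, 16.0]
scores = ccr_efficiency(labour, yield_t)
# Farm 2 (0.6 t/day) defines the frontier, so its score is 1.0;
# the others can be read as "fraction of best-practice productivity".
```

With multiple inputs (land, labour, capital, seed) the full linear programme must be solved, but the frontier interpretation is the same.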
Procedia PDF Downloads 288
19154 Career Guidance System Using Machine Learning
Authors: Mane Darbinyan, Lusine Hayrapetyan, Elen Matevosyan
Abstract:
Artificial Intelligence in Education (AIED) was created to help students get ready for the workforce, and over the past 25 years it has grown significantly, offering a variety of technologies to support academic, institutional, and administrative services. However, career guidance remains challenging, especially considering the labor market's rapid change. While choosing a career, people face various obstacles because they do not take their own preferences into consideration, which can lead to many other problems, such as shifting jobs, work stress, occupational infirmity, reduced productivity, and manual error. Besides preferences, people should properly evaluate their technical and non-technical skills, as well as their personalities. Professional counseling has become a difficult undertaking for counselors due to the wide range of career choices brought on by changing technological trends. It is necessary to close this gap by utilizing technology that makes sophisticated predictions about a person's career goals based on their personality. Hence, there is a need to create an automated model that helps in decision-making based on user inputs. Improving career guidance can be achieved by embedding machine learning into the career consulting ecosystem. Various career guidance systems work on the same logic: classifying applicants, matching applications with appropriate departments or jobs, making predictions, and providing suitable recommendations. Methodologies such as KNN, neural networks, K-means clustering, decision trees, and many other advanced algorithms are applied to the collected data to predict suitable careers. Besides helping users with their career choice, these systems provide numerous opportunities that are very useful while making this hard decision.
They help candidates recognize where they specifically lack sufficient skills so that they can improve those skills. They are also capable of offering an e-learning platform that takes the user's knowledge gaps into account. Furthermore, users can be provided with details on a particular job, such as the abilities required to excel in that industry.
Keywords: career guidance system, machine learning, career prediction, predictive decision, data mining, technical and non-technical skills
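As a concrete illustration of one of the methodologies named above, the sketch below implements a bare-bones KNN classifier; the feature encoding and career labels are invented for illustration, not taken from any real guidance system:

```python
from collections import Counter
import math

def knn_predict(train_X, train_y, query, k=3):
    """Classify `query` by majority vote among its k nearest neighbours
    (Euclidean distance in feature space)."""
    dists = sorted(
        (math.dist(x, query), label) for x, label in zip(train_X, train_y)
    )
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# Hypothetical feature vectors: (maths aptitude, verbal aptitude), scaled 0-1.
X = [(0.9, 0.2), (0.8, 0.3), (0.2, 0.9), (0.3, 0.8), (0.85, 0.25)]
y = ["engineering", "engineering", "journalism", "journalism", "engineering"]
prediction = knn_predict(X, y, (0.75, 0.3), k=3)
```

A real system would use far richer features (personality scores, skill self-assessments) and cross-validated choice of k, but the vote-among-neighbours logic is the same.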
Procedia PDF Downloads 67
19153 Using Closed Frequent Itemsets for Hierarchical Document Clustering
Authors: Cheng-Jhe Lee, Chiun-Chieh Hsu
Abstract:
Due to the rapid development of the Internet and the increased availability of digital documents, the excess of information on the Internet has led to the problem of information overload. To enable effective information retrieval, document clustering in text mining has become a popular research topic. Clustering is the unsupervised classification of data items into groups without the need for training data. Many conventional document clustering methods perform inefficiently on large document collections because they were originally designed for relational databases; they are therefore impractical for real-world document clustering and require special handling for high dimensionality and high volume. We employ the FIHC (Frequent Itemset-based Hierarchical Clustering) method, a hierarchical clustering method developed for document clustering. The intuition of FIHC is that there exist some common words for each cluster; FIHC uses such words to cluster documents and builds a hierarchical topic tree. In this paper, we combine the FIHC algorithm with an ontology to address the semantic problem and mine the meaning behind the words in documents. Furthermore, we use closed frequent itemsets instead of frequent itemsets alone, which increases efficiency and scalability. The experimental results show that our method is more accurate than well-known document clustering algorithms.
Keywords: FIHC, document clustering, ontology, closed frequent itemset
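The closed-itemset idea can be made concrete with a brute-force miner: a frequent itemset is closed if no proper superset has the same support, so keeping only closed itemsets discards redundant subsets without losing support information. The sketch below is a naive enumeration over toy data with invented term sets (real FIHC-style systems use far more efficient mining algorithms):

```python
from itertools import combinations

def closed_frequent_itemsets(transactions, min_support):
    """Return {itemset: support} for frequent itemsets that are closed,
    i.e. no proper superset has the same support."""
    items = sorted({i for t in transactions for i in t})
    support = {}
    # Enumerate every candidate itemset and count its support.
    for size in range(1, len(items) + 1):
        for cand in combinations(items, size):
            s = sum(1 for t in transactions if set(cand) <= t)
            if s >= min_support:
                support[frozenset(cand)] = s
    # Keep only closed itemsets.
    return {
        iset: s for iset, s in support.items()
        if not any(iset < other and s == support[other] for other in support)
    }

# Toy "documents" represented as sets of terms.
docs = [{"data", "mining"}, {"data", "mining", "text"}, {"data", "text"}]
closed = closed_frequent_itemsets(docs, min_support=2)
# {"mining"} is frequent but not closed: {"data", "mining"} has the
# same support (2), so only the larger, closed set is kept.
```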
Procedia PDF Downloads 397
19152 Internet Health: A Cross-Sectional Survey Exploring Identified Risks and Online Safety Measures in Parent and Children with Neurodevelopmental Disorders
Authors: Abdirahim Mohamed, Sarita Rana Chhetri, Michael Sleath, Nadia Saleem
Abstract:
Rationale: Internet usage has become thoroughly integrated into our daily lives, and internet use within the neurodevelopmental disorder population is also on the increase. Nevertheless, there is very little empirical research on how this population protects itself online, or on how their parents can keep them safe. This topic was an ever-growing concern to the parents within our services and in many cases added to parents' stress and affected their mental health. This prompted our team to conduct research exploring the perceived online risks within this population and how its members keep themselves safe. In conjunction, we also explored how parents and caregivers monitor and safeguard their young people against potential threats online. Our hypothesis was that the perceived risks would heavily outnumber the safeguarding measures implemented by this population. Method: Within the Coventry and Warwickshire NHS Partnership Trust Child and Adolescent Mental Health Service (CAMHS), we distributed qualitative questionnaires to all the clinical bases (N=80). Questions explored topics such as daily internet usage, safeguarding measures, and perceived threats. The researchers requested that all CAMHS clinicians identify participants. Participants in this study were accessing CAMHS for neurodevelopment-specific interventions. Results: The data were analysed using both Excel and SPSS. Within SPSS, a MANOVA found a significant difference between safeguarding measures and perceived online risks within responses (p ≤ 0.05). This supports our hypothesis that participants in this population are well versed in the safeguarding issues of the internet but struggle to implement appropriate preventative measures. Data were also screened using Excel, which showed that all parents and carers stated they 'monitored their child's internet use'.
Conclusion: The data suggest that parents/carers may require more specific intervention to equip them with preventative measures, given the clear discrepancy between perceived risks and safeguarding measures. More research may also need to be conducted in this area to determine an appropriate methodology for exploring the topic further.
Keywords: internet, health, how safe are we, internet health check
Procedia PDF Downloads 267
19151 Crow Search Algorithm-Based Task Offloading Strategies for Fog Computing Architectures
Authors: Aniket Ganvir, Ritarani Sahu, Suchismita Chinara
Abstract:
The rapid digitization of various aspects of life is leading to the creation of smart IoT ecosystems, where interconnected devices generate significant amounts of valuable data. However, these IoT devices face constraints such as limited computational resources and bandwidth. Cloud computing emerges as a solution by offering ample resources for offloading tasks efficiently, but it introduces latency issues, especially for time-sensitive applications. Fog computing (FC) addresses these latency concerns by bringing computation and storage closer to the network edge, minimizing data travel distance and enhancing efficiency. Offloading tasks to fog nodes or the cloud can conserve energy and extend IoT device lifespan. The offloading process is intricate, with tasks categorized as full or partial, and its optimization presents an NP-hard problem. Traditional greedy search methods struggle to address the complexity of task offloading efficiently. To overcome this, the efficient crow search algorithm (ECSA) is proposed as a meta-heuristic optimization algorithm. ECSA aims to optimize computation offloading effectively, providing solutions to this challenging problem.
Keywords: IoT, fog computing, task offloading, efficient crow search algorithm
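A basic (non-"efficient") variant of the crow search algorithm can be sketched as follows; the quadratic objective is a stand-in for an offloading cost function, and all parameter values are illustrative rather than those of ECSA:

```python
import random

def crow_search(objective, dim, n_crows=10, iters=100,
                awareness=0.1, flight_length=2.0, bounds=(-5.0, 5.0), seed=42):
    """Minimise `objective` with a basic Crow Search Algorithm (CSA).

    Each crow remembers the best position it has found; in every iteration
    it either follows a randomly chosen crow's memory or, if that crow is
    'aware' (probability `awareness`), moves to a random position.
    """
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_crows)]
    mem = [p[:] for p in pos]                      # best position per crow
    mem_fit = [objective(p) for p in mem]
    for _ in range(iters):
        for i in range(n_crows):
            j = rng.randrange(n_crows)             # crow to follow
            if rng.random() >= awareness:
                new = [pos[i][d] + flight_length * rng.random()
                       * (mem[j][d] - pos[i][d]) for d in range(dim)]
            else:                                  # crow j was aware: random move
                new = [rng.uniform(lo, hi) for _ in range(dim)]
            pos[i] = [min(hi, max(lo, v)) for v in new]
            f = objective(pos[i])
            if f < mem_fit[i]:                     # update this crow's memory
                mem[i], mem_fit[i] = pos[i][:], f
    best = min(range(n_crows), key=lambda i: mem_fit[i])
    return mem[best], mem_fit[best]

# Toy stand-in for an offloading cost: a simple quadratic surface.
sol, cost = crow_search(lambda x: sum(v * v for v in x), dim=2)
```

In an offloading setting, a position would encode a candidate task-to-node assignment and the objective would combine latency and energy terms.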
Procedia PDF Downloads 56
19150 Effect of Electronic Banking on the Performance of Deposit Money Banks in Nigeria: Using ATM and Mobile Phone as a Case Study
Authors: Charity Ifunanya Osakwe, Victoria Ogochuchukwu Obi-Nwosu, Chima Kenneth Anachedo
Abstract:
The study investigates how automated teller machines (ATMs) and mobile banking affect deposit money banks in the Nigerian economy. The study made use of time series data obtained from the Central Bank of Nigeria Statistical Bulletin from 2009 to 2021. The Central Bank of Nigeria (CBN) data on automated teller machines and mobile phones were used to proxy electronic banking, while total deposits in banks proxied the performance of deposit money banks. The analysis was done using the ordinary least squares econometric technique with the aid of the EViews statistical package. The results show that the automated teller machine has a positive and significant effect on the total deposits of deposit money banks in Nigeria, as does mobile banking. The study concluded that e-banking has increased banking access for customers and also created room for banks to expand their operations to more customers. The study recommends that banks in Nigeria prioritize the expansion and maintenance of ATM networks and continue to invest in and develop more mobile banking services.
Keywords: electronic, banking, automated teller machines, mobile, deposit
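The ordinary least squares estimation used in studies of this kind can be illustrated for a single regressor via the textbook normal-equation formulas; the series below are invented index values, not CBN data:

```python
def ols_fit(x, y):
    """Ordinary least squares for y = a + b*x (normal-equation solution)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    # Slope: covariance of x and y over variance of x.
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx            # intercept from the means
    return a, b

# Hypothetical series: ATM transaction volume (index) vs total deposits (index).
atm = [1.0, 2.0, 3.0, 4.0, 5.0]
deposits = [2.1, 4.1, 6.1, 8.1, 10.1]   # constructed as deposits = 0.1 + 2*atm
a, b = ols_fit(atm, deposits)
```

A positive, significant slope on the electronic-banking proxy is what the reported result amounts to; significance testing adds standard errors on top of this point estimate.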
Procedia PDF Downloads 52
19149 The Influence of Environmental Attributes on Children's Pedestrian-Crash Risk in School Zones
Authors: Jeongwoo Lee
Abstract:
Children are the most vulnerable travelers, and they are at risk of pedestrian injury. Creating a safe route to school is important because walking to school is one of the main opportunities for promoting needed physical exercise among children. This study examined how the built environmental attributes near an elementary school influence traffic accidents among school-aged children. The study used two complementary data sources: the locations of police-reported pedestrian crashes and the built environmental characteristics of school areas. The environmental attributes of road segments were collected through GIS measurements of local data and site audits using an inventory developed for measuring pedestrian-crash risk scores. The inventory data were collected at 840 road segments near 32 elementary schools in the city of Ulsan. We observed all segments in a 300-meter-radius area from the entrance of each elementary school; segments are street block faces. The inventory included 50 items organized into four domains: accessibility (17 items), pleasurability (11 items), perceived safety from traffic (9 items), and traffic and land-use measures (13 items). Elementary schools were categorized into two groups based on the distribution of pedestrian-crash hazard index scores. A high pedestrian-crash zone was defined as a school area within the eighth, ninth, and tenth deciles, while a no pedestrian-crash zone was defined as a school zone with no pedestrian-crash accident among school-aged children between 2013 and 2016. No- and high pedestrian-crash zones were compared to determine whether different settings of the built environment near the school lead to different rates of pedestrian-crash incidents. The results showed that crash risk can be influenced by several environmental factors, such as the shape of the school route, the number of intersections, visibility and land use along a street, and the type of sidewalk.
The findings inform policy for creating safe routes to school that reduce pedestrian-crash risk among children by focusing on school zones.
Keywords: active school travel, school zone, pedestrian crash, safe route to school
Procedia PDF Downloads 244
19148 Estimations of Spectral Dependence of Tropospheric Aerosol Single Scattering Albedo in Sukhothai, Thailand
Authors: Siriluk Ruangrungrote
Abstract:
Analyses of available data from MFR-7 measurements were performed and discussed in a study of tropospheric aerosol and its consequences in Thailand, since the aerosol single scattering albedo (ω) is one of the most important parameters for determining the aerosol effect on radiative forcing. Here the estimation of ω was determined directly as the ratio of aerosol scattering optical depth to aerosol extinction optical depth (τscat/τext), without any use of aerosol computer code models. This has the benefit of eliminating the uncertainty caused by modeling assumptions and by the estimation of actual aerosol input data. Diurnal ω for five cloudless days in winter and early summer at five distinct wavelengths (415, 500, 615, 673 and 870 nm), with corrections for Rayleigh scattering and atmospheric column NO2 and ozone contents, was investigated. In addition, the tendency of the spectral dependence of ω representing the two seasons was observed. The spectral results reveal that during wintertime the atmosphere of the inland rural vicinity, for the period of measurement, was probably dominated by a smaller amount of soil dust aerosol loading than in early summer. Hence, the major aerosol loading, particularly in summer, was subject to a mixture of both soil dust and biomass burning aerosols.
Keywords: aerosol scattering optical depth, aerosol extinction optical depth, biomass burning aerosol, soil dust aerosol
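The direct estimate described above follows from the definition of the single scattering albedo: extinction is the sum of scattering and absorption, so ω = τscat/τext. A minimal sketch with illustrative optical depths (not the Sukhothai measurements):

```python
def single_scattering_albedo(tau_scat, tau_abs):
    """Aerosol single scattering albedo: scattering optical depth divided
    by extinction (= scattering + absorption) optical depth."""
    tau_ext = tau_scat + tau_abs
    return tau_scat / tau_ext

# Illustrative values (not from the Sukhothai data set).
omega = single_scattering_albedo(tau_scat=0.18, tau_abs=0.02)
# A value near 1 indicates a mostly scattering aerosol; absorbing
# biomass-burning aerosol pulls omega down.
```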
Procedia PDF Downloads 405
19147 A Comparative Analysis of Machine Learning Techniques for PM10 Forecasting in Vilnius
Authors: Mina Adel Shokry Fahim, Jūratė Sužiedelytė Visockienė
Abstract:
With the growing concern over air pollution (AP), awareness of the problem has gained more prominence than ever before, often driven by an understanding of how poor air quality indices (AQI) damage human health. The study focuses on assessing air pollution prediction models specifically for Lithuania, addressing a substantial need for empirical research within the region. Concentrating on Vilnius, it examines concentrations of particulate matter 10 micrometers or less in diameter (PM10). Utilizing Gaussian Process Regression (GPR), Regression Tree Ensemble, and Regression Tree methodologies, predictive forecasting models are validated and tested using hourly data from January 2020 to December 2022. The study explores the classification of AP data into anthropogenic and natural sources, the impact of AP on human health, and its connection to cardiovascular diseases. The study revealed varying levels of accuracy among the models, with GPR achieving the highest accuracy, indicated by an RMSE of 4.14 in validation and 3.89 in testing.
Keywords: air pollution, anthropogenic and natural sources, machine learning, Gaussian process regression, tree ensemble, forecasting models, particulate matter
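The RMSE figures quoted above follow the standard definition: the square root of the mean squared difference between observations and forecasts. A minimal sketch with invented hourly values (not the Vilnius data):

```python
import math

def rmse(actual, predicted):
    """Root-mean-square error between observed and forecast values."""
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted))
                     / len(actual))

# Illustrative hourly PM10 observations vs model forecasts (ug/m3).
observed = [20.0, 25.0, 30.0, 28.0]
forecast = [22.0, 24.0, 27.0, 29.0]
error = rmse(observed, forecast)
```

Because errors are squared before averaging, RMSE penalises a few large misses more heavily than many small ones, which matters for pollution-episode forecasting.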
Procedia PDF Downloads 51
19146 Challenges of Skill Training among Women with Intellectual Disability: Stakeholders' Perspective
Authors: Jayanti Pujari
Abstract:
The present study attempts to identify the barriers faced by adult women with an intellectual disability during their training at vocational training centres offered by rehabilitation institutes. As economic independence is the ultimate aim of rehabilitation, the study focuses on the barriers that prevent adult women with intellectual disability from acquiring the skills that could empower them and help them live independently. The objectives of the study are (1) to find out the barriers perceived by job coaches during training given to women with intellectual disability, (2) to find out the barriers perceived by the parents of women with intellectual disability who are undergoing vocational training, and (3) to find out the barriers perceived by the women with intellectual disabilities during the vocational training. Barriers have been operationalised in the present study from three perspectives: behavioural barriers, competency-related barriers and accessibility barriers. Three groups of participants (N=60) were selected through purposive non-probability sampling to generate the data: 20 job coaches working at vocational centres, 20 parents of women with intellectual disabilities, and 20 adult women with intellectual disabilities. The study followed a descriptive research design, and data were generated through three sets of self-developed and face-validated questionnaires, one for each category of participant. Each questionnaire has 30 close-ended questions, and respondents answer on a three-point scale (yes, no, need help). Both qualitative and quantitative analyses were conducted to test the hypothesis.
The major findings of the study show that 87% of the women with intellectual disability perceived competency-related barriers as the highest, whereas barriers related to behaviour and accessibility were perceived as lowest. 92% of job coaches perceived that barriers related to competencies and accessibility were the greatest hindrances to the effectiveness of skill development for women with intellectual disability, and 74% of the parents of adult women with intellectual disability also opined that barriers related to competencies and accessibility were highest. In conclusion, it is stressed that there is a need to create awareness among stakeholders about training and management strategies for skill training and positive behaviour support, which will enable adult women with intellectual disability to utilise their residual skills and acquire training to become economically independent.
Keywords: economic independence, intellectual disability, skill development, training barrier
Procedia PDF Downloads 221
19145 STML: Service Type-Checking Markup Language for Services of Web Components
Authors: Saqib Rasool, Adnan N. Mian
Abstract:
Web components are introduced in the latest HTML5 standards for writing modular web interfaces, ensuring maintainability through the isolated scope of each web component. Reusability can also be achieved by sharing plug-and-play web components that can be used off the shelf by other developers. A web component encapsulates all the required HTML, CSS and JavaScript code as a standalone package, which must be imported to integrate the web component within an existing web interface. This is then followed by the integration of the web component with web services for dynamically populating its content. Since web components are reusable off-the-shelf components, they must be equipped with some mechanism for ensuring their proper integration with web services. The consistency of a service's behavior can be verified through type checking, one of the popular solutions for improving code quality in many programming languages. However, HTML does not provide type checking, as it is a markup language and not a programming language. The contribution of this work is a new extension of HTML called Service Type-checking Markup Language (STML), which adds type-checking support to HTML for JSON-based REST services. STML can be used to define the expected data types of responses from JSON-based REST services, which are then used for populating the content within the HTML elements of a web component. Although JSON has five value types, viz. string, number, boolean, object and array, STML supports only string, number and boolean, because both object and array are treated as strings when populated in HTML elements. To define the data type of any HTML element, the developer just needs to add the custom STML attribute st-string, st-number or st-boolean for string, number and boolean respectively.
All these STML annotations are written by the developer of a web component, and they enable other developers to use automated type checking to ensure the proper integration of their REST services with the same web component. Two utilities have been written for developers who use STML-based web components. The first provides automated type checking during the development phase: it uses the browser console to show an error description if an integrated web service does not return a response with the expected data type. The second is a Gulp-based command line utility for removing the STML attributes before going into production, ensuring the delivery of STML-free web pages in the production environment. Both utilities have been tested for type checking of REST services through STML-based web components, and the results confirm the feasibility of evaluating service behavior through HTML alone. Currently, STML is designed for automated type checking of integrated REST services, but it can be extended into a complete HTML-only service testing suite, transforming STML from a Service Type-checking Markup Language into a Service Testing Markup Language.
Keywords: REST, STML, type checking, web component
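The core STML check, matching each field of a JSON response against a declared st-* type, can be mirrored in a few lines. The sketch below is a Python rendering of that idea, not the actual STML utilities; the mapping from element annotations to field names is an assumption made for illustration:

```python
import json

# Hypothetical mirror of STML's check: map response fields to the st-*
# annotation the corresponding HTML element would carry.
ST_TYPES = {"st-string": str, "st-number": (int, float), "st-boolean": bool}

def type_check(annotations, response_json):
    """Return a list of (field, error) pairs for fields whose JSON value
    does not match the declared st-* annotation."""
    data = json.loads(response_json)
    errors = []
    for field, st_type in annotations.items():
        expected = ST_TYPES[st_type]
        value = data.get(field)
        # bool is a subclass of int in Python, so reject it for st-number.
        if isinstance(value, bool) and expected is not bool:
            errors.append((field, "expected %s, got boolean" % st_type))
        elif not isinstance(value, expected):
            errors.append((field, "expected %s" % st_type))
    return errors

annotations = {"name": "st-string", "price": "st-number", "inStock": "st-boolean"}
ok = type_check(annotations, '{"name": "tea", "price": 3.5, "inStock": true}')
bad = type_check(annotations, '{"name": "tea", "price": "3.5", "inStock": true}')
```

The development-phase utility described above does essentially this, reporting each mismatch to the browser console instead of returning a list.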
Procedia PDF Downloads 251
19144 Synchronous Reference Frame and Instantaneous P-Q Theory Based Control of Unified Power Quality Conditioner for Power Quality Improvement of Distribution System
Authors: Ambachew Simreteab Gebremedhn
Abstract:
Context: The paper explores the use of synchronous reference frame theory (SRFT) and instantaneous reactive power theory (IRPT) based control of a Unified Power Quality Conditioner (UPQC) for improving power quality in distribution systems. Research Aim: To investigate the performance of different control configurations of the UPQC using SRFT and IRPT for mitigating power quality issues in distribution systems. Methodology: The study compares three control techniques (SRFT-IRPT, SRFT-SRFT, IRPT-IRPT) implemented in the series and shunt active filters of the UPQC. Data are collected under various control algorithms to analyze UPQC performance. Findings: Results indicate the effectiveness of SRFT- and IRPT-based control techniques in addressing power quality problems such as voltage sags, swells, unbalance, harmonics, and current harmonics in distribution systems. Theoretical Importance: The study provides insights into the application of SRFT and IRPT in improving power quality, specifically in mitigating unbalanced voltage sags, where conventional methods fall short. Data Collection: Data are collected under various control algorithms, using simulation in MATLAB Simulink and real-time operation executed in RT-LAB, from which the experimental results are obtained. Analysis Procedures: The performance of the UPQC under different control algorithms is analyzed to evaluate the effectiveness of SRFT- and IRPT-based control techniques in mitigating power quality issues. Questions Addressed: How do SRFT- and IRPT-based control techniques compare in improving power quality in distribution systems? What is the impact of different control configurations on the performance of the UPQC? Conclusion: The study demonstrates the efficacy of SRFT- and IRPT-based control of the UPQC in mitigating power quality issues in distribution systems, highlighting their potential for enhancing voltage and current quality.
Keywords: power quality, UPQC, shunt active filter, series active filter, non-linear load, RT-LAB, MATLAB
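The synchronous reference frame method rests on the Park (abc to dq) transform, which maps a balanced three-phase set to constant d and q components, so sags, unbalance and harmonics show up as deviations that the controller can extract. A minimal sketch of the amplitude-invariant form (sign conventions vary between texts, so this is one common choice, not necessarily the authors'):

```python
import math

def abc_to_dq(a, b, c, theta):
    """Park transform used in SRFT: project three-phase quantities onto a
    reference frame rotating at angle `theta` (amplitude-invariant form)."""
    two_pi_3 = 2.0 * math.pi / 3.0
    d = (2.0 / 3.0) * (a * math.cos(theta)
                       + b * math.cos(theta - two_pi_3)
                       + c * math.cos(theta + two_pi_3))
    q = -(2.0 / 3.0) * (a * math.sin(theta)
                        + b * math.sin(theta - two_pi_3)
                        + c * math.sin(theta + two_pi_3))
    return d, q

# A balanced set locked to the frame maps to d = amplitude, q = 0;
# disturbances appear as ripple or offsets on d and q.
theta = 0.7            # arbitrary instant of the synchronous angle
amp = 10.0
a = amp * math.cos(theta)
b = amp * math.cos(theta - 2.0 * math.pi / 3.0)
c = amp * math.cos(theta + 2.0 * math.pi / 3.0)
d, q = abc_to_dq(a, b, c, theta)
```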
Procedia PDF Downloads 6
19143 Correlation of Material Mechanical Characteristics Obtained by Means of Standardized and Miniature Test Specimens
Authors: Vaclav Mentl, P. Zlabek, J. Volak
Abstract:
New methods of mechanical testing based on miniature test specimens (e.g. the Small Punch Test) have been developed recently. The most important advantage of these methods is the nearly non-destructive withdrawal of test material and the small size of the test specimens, which is valuable for remaining-lifetime assessment, when a sufficient volume of representative material cannot be withdrawn from the component in question. Conversely, their most important disadvantage stems from the necessity to correlate test results with the results of standardised test procedures and to build up a database of in-service material data. Correlations between the miniature test specimen data and the results of standardised tests are therefore necessary. The paper describes the results of fatigue tests performed on miniature test specimens in comparison with traditional fatigue tests for several steels applied in the power-producing industry. Special miniature test specimen fixtures were designed and manufactured for fatigue testing on a Zwick/Roell 10HPF5100 testing machine. The miniature test specimens were produced from the traditional test specimens. Seven different steels were fatigue loaded (R = 0.1) at room temperature.
Keywords: mechanical properties, miniature test specimens, correlations, small punch test, micro-tensile test, mini-Charpy impact test
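The quoted stress ratio R = 0.1 fixes the shape of the loading cycle: given the maximum stress, R = sigma_min/sigma_max determines the minimum stress, and hence the mean stress and stress amplitude. A small sketch with an illustrative stress level (the 400 MPa figure is not from the paper):

```python
def cycle_from_ratio(sigma_max, stress_ratio):
    """Mean stress and stress amplitude of a constant-amplitude fatigue
    cycle, given the maximum stress and the ratio R = sigma_min/sigma_max."""
    sigma_min = stress_ratio * sigma_max
    mean = 0.5 * (sigma_max + sigma_min)
    amplitude = 0.5 * (sigma_max - sigma_min)
    return mean, amplitude

# R = 0.1 as in the tests reported here; the stress level is illustrative.
mean, amp = cycle_from_ratio(sigma_max=400.0, stress_ratio=0.1)
# R = 0.1 keeps the cycle fully tensile, with a small positive minimum stress.
```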
Procedia PDF Downloads 535
19142 Comparison of Methodologies to Compute the Probabilistic Seismic Hazard Involving Faults and Associated Uncertainties
Authors: Aude Gounelle, Gloria Senfaute, Ludivine Saint-Mard, Thomas Chartier
Abstract:
The long-term deformation rates of faults are not fully captured by Probabilistic Seismic Hazard Assessment (PSHA). PSHA that uses catalogues to develop area or smoothed-seismicity sources is limited by the data available to constrain future earthquake activity rates. The integration of faults in PSHA can at least partially address the long-term deformation. However, careful treatment of fault sources is required, particularly in low strain rate regions, where estimated seismic hazard levels are highly sensitive to assumptions concerning fault geometry, segmentation and slip rate. When integrating faults in PSHA, various constraints on earthquake rates from geologic and seismologic data have to be satisfied; for low strain rate regions, where such data are scarce, this is especially challenging. Integrating faults in PSHA requires converting the geologic and seismologic data into fault geometries and slip rates, and then into earthquake activity rates. Several approaches exist for translating slip rates into earthquake activity rates. In the most frequently used approach, the background earthquakes are handled using a truncated approach, in which earthquakes with a magnitude lower than or equal to a threshold magnitude (Mw) occur in the background zone, at a rate defined by the earthquake catalogue, while magnitudes higher than the threshold are located on the fault, at a rate defined using the average slip rate of the fault. As highlighted by several studies, seismic events with magnitudes stronger than the selected threshold may potentially occur in the background and not only on the fault, especially in regions of slow tectonic deformation. It is also known that several sections of a fault, or several faults, can rupture during a single fault-to-fault rupture.
It is then essential to apply a consistent modelling procedure that allows a large set of possible fault-to-fault ruptures to occur aleatorily in the hazard model while reflecting the individual slip rate of each section of the fault. In 2019, a tool named SHERIFS (Seismic Hazard and Earthquake Rates in Fault Systems) was published. The tool uses a methodology to calculate the earthquake rates in a fault system in which the slip-rate budget of each fault is converted into rupture rates for all possible single-fault and fault-to-fault ruptures. The objective of this paper is to compare the SHERIFS method with another frequently used model, to analyse the impact on the seismic hazard and, through sensitivity studies, to better understand the influence of key parameters and assumptions. For this application, a simplified but realistic case study was selected in an area of moderate to high seismicity (southeast France), where the fault is assumed to have a low strain rate.
Keywords: deformation rates, faults, probabilistic seismic hazard, PSHA
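The slip-rate-to-earthquake-rate conversion that SHERIFS generalises can be illustrated in its simplest, single-characteristic-magnitude form: the fault's annual seismic moment rate (rigidity times rupture area times slip rate) is divided by the moment of one event. The sketch below is a toy moment balance using illustrative fault parameters and the standard moment-magnitude relation, not the SHERIFS algorithm:

```python
def characteristic_rate(slip_rate_mm_yr, fault_area_km2, mag,
                        shear_modulus=3.0e10):
    """Annual rate of characteristic earthquakes of magnitude `mag` that
    balances the fault's seismic moment budget (all slip assumed seismic).

    Moment rate:  M0_dot = mu * A * slip_rate    [N*m / yr]
    Event moment: M0(Mw) = 10**(1.5*Mw + 9.1)    [N*m]
    """
    slip_m_yr = slip_rate_mm_yr * 1e-3     # mm/yr -> m/yr
    area_m2 = fault_area_km2 * 1e6         # km^2  -> m^2
    moment_rate = shear_modulus * area_m2 * slip_m_yr
    event_moment = 10.0 ** (1.5 * mag + 9.1)
    return moment_rate / event_moment

# Illustrative slow fault: 0.1 mm/yr over a 20 km x 10 km plane, Mw 6.5 events.
rate = characteristic_rate(0.1, 200.0, 6.5)
years_between = 1.0 / rate     # implied mean recurrence interval
```

SHERIFS refines this idea by spreading the slip-rate budget over many rupture scenarios (single-section and fault-to-fault) rather than one characteristic magnitude.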
Procedia PDF Downloads 62