Search results for: multiple stepwise regression analysis
30446 An Ultra-Low Output Impedance Power Amplifier for Tx Array in 7-Tesla Magnetic Resonance Imaging
Authors: Ashraf Abuelhaija, Klaus Solbach
Abstract:
In ultra-high-field MRI scanners (3 T and higher), parallel RF transmission using multiple RF chains with multiple transmit elements is a promising approach to overcoming the high-field MRI challenges of inhomogeneity in the RF magnetic field and SAR. However, mutual coupling between the transmit array elements disturbs the desired independent control of the RF waveform for each element. This contribution demonstrates an 18 dB improvement in decoupling (isolation) performance due to the very low output impedance of our 1 kW power amplifier.
Keywords: EM coupling, inter-element isolation, magnetic resonance imaging (MRI), parallel transmit
Procedia PDF Downloads 495
30445 The Use of Mobile Phone as Enhancement to Mark Multiple Choice Objectives English Grammar and Literature Examination: An Exploratory Case Study of Preliminary National Diploma Students, Abdu Gusau Polytechnic, Talata Mafara, Zamfara State, Nigeria
Authors: T. Abdulkadir
Abstract:
Marking and assessing multiple-choice examinations manually is widely regarded as a cumbersome and herculean task in Nigeria, largely because mass numbers of candidates are known to take the same examination simultaneously. Marking such a mammoth number of booklets daunts even the fastest paid examiners, who often undertake the job with the resulting consequences of stress and boredom. This paper explores the evolution of, and sets out to envision, transforming the marking of Multiple Choice Objectives-type examinations into a thing of creative recreation, or perhaps a more relaxing activity, via the use of the mobile phone. A more 'pragmatic' method was employed rather than a formal 'in-depth research' approach, owing to the novelty of the mobile-smartphone e-Marking Scheme. Moreover, being an evolutionary scheme, no recent academic work sharing the same topic concept as the 'use of cell phone as an e-marking technique' was found online; hence the dearth of even miscellaneous citations in this work. Anticipated future advancements steered the motive of this paper, which lays out the fundamental proposition. The paper introduces for the first time the concept of mobile-smartphone e-marking, the steps to achieve it, and the merits and demerits of the technique, all spelt out in the subsequent pages.
Keywords: cell phone, e-marking scheme (eMS), mobile phone, mobile-smart phone, multiple choice objectives (MCO), smartphone
Procedia PDF Downloads 259
30444 Job Satisfaction and Commitment among Academic Staff of Selected Colleges of Education in Kano and Kaduna States of Nigeria
Authors: Mary Okonkwo Ekwy
Abstract:
The growing disillusionment of College of Education teachers with academic life vis-à-vis their job satisfaction and commitment was investigated in this study, with a view to finding out whether both their job satisfaction and commitment have suffered, and whether there is a relationship between job satisfaction and commitment among these teachers. Due consideration was also given to the possible effects of demographic variables on attitudes to the job. Research instruments measuring levels of job satisfaction and commitment were administered to a sample of 200 College of Education teachers, comprising 15 Professors, 9 Principal Lecturers, 70 Senior Lecturers and 106 Lecturers. Five major hypotheses were tested regarding the relationship between job satisfaction and commitment. Pearson correlation, the F-ratio, and regression analysis were used for data analysis and hypothesis testing. The results suggest that perhaps the best way to secure the commitment of teachers is to ensure their job satisfaction. Future investigations will further enrich our knowledge of these very important themes.
Keywords: job satisfaction, commitment, academic staff, college of education
Procedia PDF Downloads 552
30443 Groundhog Day as a Model for the Repeating Spectator and the Film Academic: Re-Watching the Same Films Again Can Create Different Experiences and Ideas
Authors: Leiya Ho Yin Lee
Abstract:
Although Groundhog Day (Harold Ramis, 1993) may seem a fairly unremarkable Hollywood comedy of the 1990s, it is argued that the film, through its protagonist Phil (Bill Murray), inadvertently but perfectly demonstrates an important aspect of filmmaking, film spectatorship and film research: repetition. Very rarely does a narrative film use one, and only one, take in its shooting. The multiple 'repeats' of Phil's various endeavours while trapped in a perpetual loop of the same day, from stealing money and tricking a woman into a casual relationship, to his multiple suicides, to eventually helping people in need, make the process of doing multiple 'takes' in filmmaking explicit. Perhaps more significantly, Phil is a perfect model for the spectator/cinephile who has seen a favourite film so many times that they remember every single detail. Crucially, the favourite film never changes, being a recording, but the cinephile's experience of that same film most likely differs with each viewing, just as Phil's character is completely transformed, from selfish and egotistic, to depressed and nihilistic, and ultimately to sympathetic and caring, even though he lives the exact same day. Moreover, the author did not arrive at this juxtaposition of film spectatorship and Groundhog Day on first viewing the film; it took a few casual re-viewings to notice the film's self-reflexivity, and further re-viewings during research revealed still more that had previously gone unnoticed. In this way, Groundhog Day not only stands as a model for filmmaking and film spectatorship; it also illustrates academic research, especially in Film Studies, where repeatedly viewing the same films is a prerequisite before new ideas and concepts are discovered in old material. This also recalls Deleuze's thesis on difference and repetition: repetition creates difference, and it is difference that creates thought.
Keywords: narrative comprehension, repeated viewing, repetition, spectatorship
Procedia PDF Downloads 320
30442 Geospatial Network Analysis Using Particle Swarm Optimization
Authors: Varun Singh, Mainak Bandyopadhyay, Maharana Pratap Singh
Abstract:
The shortest path (SP) problem concerns finding the shortest path from a specified origin to a specified destination in a given network while minimizing the total cost associated with the path. The problem has widespread applications, notably vehicle routing in transportation systems, particularly in-vehicle Route Guidance Systems (RGS), and the traffic assignment problem in transportation planning. Evolutionary methods such as Genetic Algorithms (GA), Ant Colony Optimization and Particle Swarm Optimization (PSO) have been applied to complex optimization problems to overcome the shortcomings of existing shortest path analysis methods, and various researchers have reported that PSO performs better than other evolutionary optimization algorithms in terms of success rate and solution quality. Further, Geographic Information Systems (GIS) have emerged as key information systems for geospatial data analysis and visualization. This paper focuses on the application of PSO to the shortest path problem between multiple points of interest (POI), based on spatial data of Allahabad City and traffic speed data collected using GPS. Geovisualization of the results of the analysis is carried out in GIS.
Keywords: particle swarm optimization, GIS, traffic data, outliers
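As a rough sketch of the optimizer named in the abstract, the following pure-Python PSO applies the canonical velocity update (inertia plus cognitive and social pulls) to a toy continuous objective. For the paper's shortest-path use, each particle would instead encode a candidate path (e.g. a node-priority vector) and the objective would be its network cost; all parameter values here are generic defaults, not the authors' settings.

```python
import random

def pso(f, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=1):
    """Minimal particle swarm optimiser for a continuous objective f."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                # per-particle best positions
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # swarm-wide best

    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy objective: the sphere function, minimised at the origin.
best, best_val = pso(lambda x: sum(v * v for v in x), dim=3)
print(best_val)
```

The swarm should drive the sphere objective close to zero; adapting this to a road network additionally requires a path encoding and repair of infeasible particles.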
Procedia PDF Downloads 483
30441 Using Simulation Modeling Approach to Predict USMLE Steps 1 and 2 Performances
Authors: Chau-Kuang Chen, John Hughes, Jr., A. Dexter Samuels
Abstract:
Prediction models for United States Medical Licensing Examination (USMLE) Steps 1 and 2 performance were constructed using a Monte Carlo simulation modeling approach via linear regression. The purpose of this study was to build robust simulation models that accurately identify the most important predictors and yield valid range estimates of Step 1 and Step 2 scores. The simulation modeling approach was deemed an effective way of predicting student performance on licensure examinations. Sensitivity (what-if) analysis in the simulation models was also used to predict the magnitude of changes in Steps 1 and 2 scores caused by changes in National Board of Medical Examiners (NBME) Basic Science Subject Board scores. In addition, the results indicated that the Medical College Admission Test (MCAT) Verbal Reasoning score and the Step 1 score were significant predictors of Step 2 performance. Hence, institutions could screen qualified applicants for interviews and document the effectiveness of their basic science education programs based on the simulation results.
Keywords: prediction model, sensitivity analysis, simulation method, USMLE
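The Monte Carlo idea described above can be sketched as follows: draw predictor values from assumed distributions, push them through a fitted regression, and read off a range estimate for the score. Every number below (intercept, coefficients, predictor means and spreads) is invented for illustration; the study's actual fitted model is not published in the abstract.

```python
import random
import statistics

# Invented regression coefficients: score = b0 + b1*NBME + b2*MCAT-verbal.
INTERCEPT, B_NBME, B_MCAT = 60.0, 1.8, 0.9

rng = random.Random(0)
simulated = []
for _ in range(10_000):
    nbme = rng.gauss(75, 5)    # assumed NBME subject-board score distribution
    mcat = rng.gauss(10, 1.5)  # assumed MCAT Verbal Reasoning distribution
    simulated.append(INTERCEPT + B_NBME * nbme + B_MCAT * mcat)

# A 95% range estimate from the simulated score distribution.
q = statistics.quantiles(simulated, n=40)
lo, hi = q[0], q[-1]
print(round(lo, 1), round(hi, 1))
```

A what-if analysis in this framework simply shifts the assumed NBME mean and re-runs the loop to see how the score range moves.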
Procedia PDF Downloads 339
30440 Effects of Handheld Video Games on Interpersonal Relationships: A Two-Wave Panel Study on Elementary School Students
Authors: Kanae Suzuki
Abstract:
Handheld video games are popular communication tools among Japanese elementary school students today. This study examines the effects of handheld video game use on students' interpersonal relationships in the real and virtual worlds. A two-wave panel survey was conducted with students of ten elementary schools at an interval of approximately six months. The questionnaire asked about the average amount of time spent playing a handheld video game during the past month, the frequency of communication with other players during game play, and interpersonal relationships, such as the number of real and virtual friends the students have. A multiple regression model was constructed for 324 students to examine causal relationships. The results indicated that the more frequently students communicated with other players while playing games, the more their number of real friends tended to increase. In contrast, no significant effect of total time spent playing games on interpersonal relationships was found. The findings suggest that communication during game play is an important factor in improving interpersonal relationships in this age group.
Keywords: communication, real friend, social adjustment, virtual friend
Procedia PDF Downloads 491
30439 The Relationship between Self-Injurious Behavior and Manner of Death
Authors: Sait Ozsoy, Hacer Yasar Teke, Mustafa Dalgic, Cetin Ketenci, Ertugrul Gok, Kenan Karbeyaz, Azem Irez, Mesut Akyol
Abstract:
Self-mutilating behavior, or self-injurious behavior (SIB), is defined as intentional harm to one's body without intent to commit suicide. SIB cases are commonly seen in psychiatric and forensic medicine practice. Despite the variety of SIB methods, cuts to the skin are the most common (70-97%) injury in this group of patients. Subjects with SIB have one or more comorbidities, including depression, anxiety, depersonalization, feelings of worthlessness, borderline personality disorder, antisocial behaviors, and histrionic personality. These individuals feel a high level of hostility towards themselves and their surroundings. Research has also revealed a strong relationship between antisocial personality disorder, criminal behavior, and SIB. This study retrospectively evaluated 6,599 autopsy cases performed at the forensic medicine institutes of six major cities (Ankara, Izmir, Diyarbakir, Erzurum, Trabzon, Eskisehir) of Turkey in 2013. The study group consisted of all cases with SIB findings (psychopathic cuts, cigarette burns, scars, etc.); the control group was created from subjects without signs of SIB, and the relationship between cause of death and group membership was investigated. The Mann-Whitney U test was used for age variables and the chi-square test for categorical variables. Multinomial logistic regression analysis was used to analyze group differences with respect to manner of death (natural, accident, homicide, suicide), and risk factors associated with each group were determined by binomial logistic regression analysis. SPSS Statistics 15.0 was used for all statistical analyses, with statistical significance set at p < 0.05. There was no significant difference between accidental and natural death among the groups (p = 0.737). Each unit increase in the number of cuts in the psychopathic group was associated with a decrease in the odds of accidental death by a factor of 0.967 (95% CI: 0.941-0.993; p = 0.015). In contrast, there was a significant difference between suicidal and natural death (p < 0.001), and also between homicidal and natural death (p = 0.025). SIB is often seen with borderline and antisocial personality disorder but may be associated with many psychiatric illnesses. Studies have shown a relationship between antisocial personality disorder and criminal behavior, and between SIB and suicidal behavior. In our study, rates of suicide, homicide and intoxication were higher in the SIB group than in the control group. It could be concluded that SIB can be used as a predictor of the possibility of one's harm to oneself and others.
Keywords: autopsy, cause of death, forensic science, self-injury behaviour
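The per-cut effect reported above is a logistic-regression odds ratio, which relates to the model coefficient by OR = exp(beta). The short computation below shows how the reported value of 0.967 compounds over multiple cuts; it is a reading of the published figure, not a re-analysis of the study's data.

```python
import math

# The reported odds ratio of 0.967 per additional cut corresponds to a
# logistic-regression coefficient beta = ln(0.967).
beta = math.log(0.967)
odds_ratio = math.exp(beta)
print(round(odds_ratio, 3))           # 0.967

# Ten additional cuts multiply the odds of accidental death by 0.967**10.
print(round(math.exp(10 * beta), 3))  # 0.715
```

So under this model, ten extra cuts reduce the odds of an accidental (versus natural) death to roughly 71% of their baseline value.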
Procedia PDF Downloads 510
30438 Heart Ailment Prediction Using Machine Learning Methods
Authors: Abhigyan Hedau, Priya Shelke, Riddhi Mirajkar, Shreyash Chaple, Mrunali Gadekar, Himanshu Akula
Abstract:
The heart is the coordinating centre of the major endocrine glandular structure of the body, producing hormones that profoundly affect the operations of the body, and diagnosing cardiovascular disease is a difficult but critical task. By extracting knowledge about the disease from patient data, data mining is a practical technique for helping doctors detect disorders. We use a variety of machine learning methods here, including logistic regression, support vector classifiers (SVC), k-nearest neighbours (KNN) classifiers, decision tree classifiers, random forest classifiers and gradient boosting classifiers. These algorithms are applied to patient data containing 13 different factors to build a system that predicts heart disease in less time and with greater accuracy.
Keywords: logistic regression, support vector classifier, k-nearest neighbour, decision tree, random forest, gradient boosting
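One of the listed methods, k-nearest neighbours, is simple enough to sketch from scratch. The two features and the tiny labelled set below are invented stand-ins for two of the abstract's 13 patient factors, not real clinical data; a real pipeline would also normalise features and evaluate on a held-out set.

```python
import math
from collections import Counter

def knn_predict(train_X, train_y, x, k=3):
    """Classify x by majority vote among its k nearest training points
    (Euclidean distance), the core of the KNN method named in the abstract."""
    nearest = sorted(range(len(train_X)),
                     key=lambda i: math.dist(train_X[i], x))
    votes = Counter(train_y[i] for i in nearest[:k])
    return votes.most_common(1)[0][0]

# Invented example with two features (age, cholesterol);
# label 1 = disease, 0 = no disease.
X = [[63, 233], [37, 250], [41, 204], [56, 236], [57, 354], [44, 141]]
y = [1, 1, 0, 0, 1, 0]
print(knn_predict(X, y, [60, 300]))  # -> 1
```

With 13 factors, the feature vectors simply grow to length 13; the distance and voting logic are unchanged.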
Procedia PDF Downloads 51
30437 A Machine Learning-based Study on the Estimation of the Threat Posed by Orbital Debris
Authors: Suhani Srivastava
Abstract:
This research delves into the classification of orbital debris through machine learning (ML): it categorizes the intensity of the threat orbital debris poses using multiple ML models, to gain insight into effectively estimating the danger specific orbital debris can pose to future space missions. As the space industry expands, orbital debris becomes a growing concern in Low Earth Orbit (LEO) because increasing debris pollution can endanger space missions. Moreover, detecting orbital debris and identifying its characteristics have become major concerns in Space Situational Awareness (SSA), and prior methods relying solely on physics can become impractical in the face of the growing issue. This research therefore approaches orbital debris concerns through machine learning, an efficient and more convenient alternative for detecting the potential threat certain orbital debris poses. We found that logistic regression worked best, with 98% accuracy, and this research provides insight into the accuracies of specific machine learning models when classifying orbital debris. Our work would help provide spacecraft manufacturers with guidelines for mitigating risk, and would help aerospace engineers identify the kinds of protection that should be incorporated into objects travelling in LEO through the predictions our models provide.
Keywords: aerospace, orbital debris, machine learning, space, space situational awareness, NASA
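The best-performing model above, logistic regression, can be sketched in a few lines of plain gradient descent. The two debris features and the labels below are invented for illustration (the abstract does not publish its feature set); a real study would normalise features and report accuracy on held-out data.

```python
import math

def train_logreg(X, y, lr=0.1, epochs=500):
    """Binary logistic regression trained by stochastic gradient descent."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # sigmoid probability
            err = p - yi                      # gradient of log-loss w.r.t. z
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

# Invented debris features: (relative velocity km/s, size m); 1 = high threat.
X = [(7.5, 0.9), (7.1, 1.2), (6.8, 0.8), (1.2, 0.05), (0.9, 0.02), (1.5, 0.1)]
y = [1, 1, 1, 0, 0, 0]
w, b = train_logreg(X, y)

def predict(x):
    z = sum(wj * xj for wj, xj in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

print(predict((7.0, 1.0)) > 0.5, predict((1.0, 0.05)) > 0.5)
```

On this separable toy set the model learns to flag fast, large debris as high threat and slow, small debris as low threat.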
Procedia PDF Downloads 21
30436 Ambiguity Resolution for Ground-based Pulse Doppler Radars Using Multiple Medium Pulse Repetition Frequency
Authors: Khue Nguyen Dinh, Loi Nguyen Van, Thanh Nguyen Nhu
Abstract:
In this paper, we propose an adaptive method to resolve ambiguities, together with a ghost-target removal process, to extract targets detected by a ground-based pulse-Doppler radar using medium pulse repetition frequency (PRF) waveforms. The ambiguity resolution method is an adaptive implementation of the coincidence algorithm, applied to a two-dimensional (2D) range-velocity matrix to resolve range and velocity ambiguities simultaneously, with a proposed clustering filter to enhance the error resilience of the system. We consider the scenario of multiple-target environments. The ghost-target removal process, based on the power after Doppler processing, is proposed to mitigate ghost detections and enhance the performance of ground-based radars using a short PRF schedule in multiple-target environments. Simulation results for a ground-based pulse-Doppler radar model are presented to show the effectiveness of the proposed approach.
Keywords: ambiguity resolution, coincidence algorithm, medium PRF, ghosting removal
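The coincidence idea underlying the method can be illustrated in one dimension: each PRF measures range only modulo its unambiguous range, and the true range is the value consistent with every folded measurement. The PRF set, ranges and tolerances below are invented; the paper's actual method works on a 2D range-velocity matrix with a clustering filter.

```python
def resolve_range(apparent, unambiguous, true_max=200.0, step=0.1, tol=0.05):
    """Return candidate true ranges (km) whose folded value under every
    PRF's unambiguous range matches the corresponding apparent range."""
    hits = []
    for i in range(int(true_max / step) + 1):
        r = i * step
        if all(abs((r % mu) - a) < tol
               for a, mu in zip(apparent, unambiguous)):
            hits.append(round(r, 1))
    return hits

# Three medium PRFs with unambiguous ranges 15, 18 and 21 km. A target at
# 47 km folds to 47%15 = 2, 47%18 = 11 and 47%21 = 5 km.
print(resolve_range([2.0, 11.0, 5.0], [15.0, 18.0, 21.0]))  # -> [47.0]
```

Because the combined period of the three PRFs (their least common multiple, 630 km here) exceeds the search window, the coincidence is unique; noisy measurements and multiple targets are what motivate the paper's clustering and ghost-removal steps.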
Procedia PDF Downloads 151
30435 Competitor Integration with Voice of Customer Ratings in QFD Studies Using Geometric Mean Based on AHP
Authors: Zafar Iqbal, Nigel P. Grigg, K. Govindaraju, Nicola M. Campbell-Allen
Abstract:
Quality Function Deployment (QFD) is a structured approach that has been used to improve the quality of products and processes in a wide range of fields. Using this systematic tool, practitioners normally rank Voice of Customer ratings (VoCs) in order to produce Improvement Ratios (IRs), which become the basis for prioritising process/product design or improvement activities. In one matrix of the House of Quality (HOQ), competitors are rated. The usual method of obtaining IRs does not always integrate competitor ratings in a systematic way that fully utilises the competitor rating information; this can divert QFD practitioners' attention from a potentially important VoC to a less important one. In order to enhance QFD analysis, we present a more systematic method for integrating competitor ratings, utilising the geometric mean of the customer rating matrix. We develop a new approach, based on the Analytic Hierarchy Process (AHP), in which we generate a matrix of multiple comparisons of all competitors and derive a geometric mean for each competitor. For each VoC, an improved IR is derived which, we argue, enhances the initial VoC importance ratings by integrating more information about competitor performance. In this way, our method can help overcome one of the possible shortcomings of QFD. We then use a published QFD example from the literature as a case study to demonstrate the use of the new AHP-based IRs, and show how these can be used to re-rank existing VoCs to, arguably, better achieve the goal of customer satisfaction in relation to VoC ratings and competitor rankings. We demonstrate how the two-dimensional AHP-based geometric mean derived from the multiple competitor comparisons matrix can be useful for analysing competitor rankings. Our method utilises an established methodology (AHP) within an established application (QFD), but in an original way (through the competitor analysis matrix), to achieve a novel improvement.
Keywords: quality function deployment, geometric mean, improvement ratio, AHP, competitors ratings
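The geometric-mean step of the AHP approach described above can be sketched directly: take the geometric mean of each row of the pairwise comparison matrix and normalise. The 3x3 matrix below is an invented three-competitor example, not a matrix from the paper's case study.

```python
import math

def ahp_weights(pairwise):
    """Priority weights from an AHP pairwise-comparison matrix via row
    geometric means, normalised to sum to 1."""
    gm = [math.prod(row) ** (1.0 / len(row)) for row in pairwise]
    total = sum(gm)
    return [g / total for g in gm]

# pairwise[i][j] = how strongly competitor i outperforms competitor j
# (standard AHP reciprocal scale; invented values).
comparisons = [
    [1.0,     3.0, 5.0],
    [1 / 3.0, 1.0, 2.0],
    [1 / 5.0, 1 / 2.0, 1.0],
]
weights = ahp_weights(comparisons)
print([round(w, 3) for w in weights])
```

The resulting weights rank the competitors; in the paper's method such competitor weights would then feed into the improvement ratio for each VoC.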
Procedia PDF Downloads 369
30434 Least Squares Method Identification of Corona Current-Voltage Characteristics and Electromagnetic Field in Electrostatic Precipitator
Authors: H. Nouri, I. E. Achouri, A. Grimes, H. Ait Said, M. Aissou, Y. Zebboudj
Abstract:
This paper analyses the behaviour of DC corona discharge in wire-to-plate electrostatic precipitators (ESP). Current-voltage curves are particularly analysed. Experimental results show that the discharge current is strongly affected by the applied voltage. The proposed method of current identification is the method of least squares. Least squares problems fall into two categories: linear (ordinary) least squares and non-linear least squares, depending on whether or not the residuals are linear in all unknowns. The linear least-squares problem occurs in statistical regression analysis and has a closed-form solution, i.e. a formula that can be evaluated in a finite number of standard operations. The non-linear problem has no closed-form solution and is usually solved iteratively.
Keywords: electrostatic precipitator, current-voltage characteristics, least squares method, electric field, magnetic field
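The closed-form linear least-squares solution mentioned above amounts to solving the normal equations. The sketch below fits a quadratic current-voltage model I = a + bV + cV² to synthetic data in pure Python; the quadratic form and the data are assumptions for illustration, and the paper's actual corona model may differ.

```python
def lstsq_quadratic(V, I):
    """Closed-form ordinary least squares for I = a + b*V + c*V**2,
    solving the 3x3 normal equations by Gaussian elimination."""
    X = [[1.0, v, v * v] for v in V]
    # Normal equations: (X^T X) beta = X^T I
    A = [[sum(X[k][i] * X[k][j] for k in range(len(V))) for j in range(3)]
         for i in range(3)]
    rhs = [sum(X[k][i] * I[k] for k in range(len(V))) for i in range(3)]
    # Gaussian elimination with partial pivoting.
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        rhs[col], rhs[piv] = rhs[piv], rhs[col]
        for r in range(col + 1, 3):
            f = A[r][col] / A[col][col]
            for c in range(col, 3):
                A[r][c] -= f * A[col][c]
            rhs[r] -= f * rhs[col]
    beta = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):  # back substitution
        beta[r] = (rhs[r] - sum(A[r][c] * beta[c]
                                for c in range(r + 1, 3))) / A[r][r]
    return beta

# Synthetic data generated from I = 2 + 0.5*V + 0.1*V**2, so the fit
# should recover those coefficients exactly.
V = [1.0, 2.0, 3.0, 4.0, 5.0]
I = [2 + 0.5 * v + 0.1 * v * v for v in V]
coeffs = lstsq_quadratic(V, I)
print([round(x, 6) for x in coeffs])  # -> [2.0, 0.5, 0.1]
```

A non-linear corona model would instead require the iterative solvers the abstract alludes to.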
Procedia PDF Downloads 431
30433 Analysis of Effect of Microfinance on the Profit Level of Small and Medium Scale Enterprises in Lagos State, Nigeria
Authors: Saheed Olakunle Sanusi, Israel Ajibade Adedeji
Abstract:
The study analysed the effect of microfinance on the profit level of small and medium scale enterprises (SMEs) in Lagos. Data were obtained by simple random sampling, with a total of one hundred and fifty (150) SMEs sampled: seventy-five (75) microfinance users and seventy-five (75) non-users. Data were analysed using descriptive statistics, a logit model, the t-test and ordinary least squares (OLS) regression. The mean profit of enterprises using microfinance is ₦16.8m, while that of non-users is ₦5.9m; the difference is statistically significant. The logit model specified for the determinants of access to microfinance showed that three of the specified variables, namely educational status of the enterprise head, credit utilisation and volume of business investment, are significant at p < 0.01. Enterprises with many years of experience, highly educated heads and high volumes of business investment have greater potential access to microfinance. The OLS regression model indicated that three parameters, namely number of school years, volume of business investment and (dummy) participation in microfinance, were significant at p < 0.05; these are therefore significant determinants of the impact of microfinance on profit level in the study area. The study concludes and recommends that, to improve the status of SMEs and increase profit, the full benefit of access to microfinance can be enhanced through investment in social infrastructure and human capital development. Concerted efforts should also be made to encourage non-users of microfinance among SMEs to use it in order to boost their profit.
Keywords: credit utilisation, logit model, microfinance, small and medium enterprises
Procedia PDF Downloads 205
30432 Banking and Accounting Analysis Researches Effect on Environment and Income
Authors: Gerges Samaan Henin Abdalla
Abstract:
Ultra-secure methods of banking services, such as online banking, have been introduced to the customer. Banks have begun to consider electronic banking (e-banking) as a way to replace some traditional branch functions, using the Internet as a distribution channel. Some consumers have at least one account at multiple banks and access these accounts through online banking; to check their current net worth, such clients need to log into each of their accounts, get detailed information, and work toward consolidation. Not only is this time consuming, it is also a repeated activity with a certain frequency. To solve this problem, the concept of account aggregation was introduced. Account aggregation in e-banking appears to build a stronger relationship with customers. An account aggregation service generally refers to a service that allows customers to manage their bank accounts held at different institutions via a common online banking platform that places a high priority on security and data protection. The article provides an overview of the account aggregation approach in e-banking as a new service in the area of e-banking.
Keywords: compatibility, complexity, mobile banking, observation, risk banking technology, Internet banks, modernization of banks, banks, account aggregation, security, enterprise development
Procedia PDF Downloads 45
30431 Updating Stochastic Hosting Capacity Algorithm for Voltage Optimization Programs and Interconnect Standards
Authors: Nicholas Burica, Nina Selak
Abstract:
The ADHCAT (Automated Distribution Hosting Capacity Assessment Tool) was designed to run hosting capacity analysis on the ComEd system via stochastic DER (Distributed Energy Resource) placement over multiple power flow simulations against a set of violation criteria. The violation criteria in the initial version of the tool captured only a limited set of the issues that individual departments design against for DER interconnections. Enhancements were made to the tool to further align with individual departments' violation and operation criteria, and new modules were added for future load profile analysis. A reporting engine was created for future analytical use based on the simulations and observations in the tool.
Keywords: distributed energy resources, hosting capacity, interconnect, voltage optimization
Procedia PDF Downloads 190
30430 Alcohol and Tobacco Influencing Prevalence of Hypertension among 15-54 Old Indian Men: An Application of Discriminant Analysis Using National Family Health Survey, 2015-16
Authors: Chander Shekhar, Jeetendra Yadav, Shaziya Allarakha
Abstract:
Hypertension has been described as an 'iceberg disease': those who suffer from it are often unaware of it and hence usually seek healthcare services at a very late stage. It is estimated that more than 2 million Indians are suffering from hypertensive heart disease, which contributed to over 0.13 million deaths in 2016. This paper aims to estimate the prevalence of hypertension in India and its variation by socioeconomic background, and to identify the risk factors that discriminate hypertensive men, with special emphasis on consumption of tobacco and alcohol, among men aged 15-54 years in India. The paper uses NFHS (2015-16) data and applies binary logistic regression and discriminant analysis to find significant predictors and discriminants of interest. The prevalence of hypertension was 16.5% in the study population. The results suggest that consumption of alcohol and tobacco are significant discriminating characteristics for hypertension, irrespective of a man's socioeconomic background.
Keywords: hypertension, alcohol, tobacco, discriminant
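The discriminant analysis the abstract applies can be illustrated with a two-feature Fisher linear discriminant: find the direction w = Sw⁻¹(m̄₁ − m̄₂) that best separates the group means relative to the pooled within-group scatter. The alcohol and tobacco values below are invented toy data, not NFHS records.

```python
import statistics

def fisher_direction(class_a, class_b):
    """Fisher linear discriminant direction for two 2-feature groups."""
    def mean_vec(rows):
        return [statistics.fmean(col) for col in zip(*rows)]

    def scatter(rows, m):
        s = [[0.0, 0.0], [0.0, 0.0]]
        for x in rows:
            d = [x[0] - m[0], x[1] - m[1]]
            for i in range(2):
                for j in range(2):
                    s[i][j] += d[i] * d[j]
        return s

    ma, mb = mean_vec(class_a), mean_vec(class_b)
    # Pooled within-class scatter matrix Sw.
    sw = [[a + b for a, b in zip(ra, rb)]
          for ra, rb in zip(scatter(class_a, ma), scatter(class_b, mb))]
    det = sw[0][0] * sw[1][1] - sw[0][1] * sw[1][0]
    inv = [[sw[1][1] / det, -sw[0][1] / det],
           [-sw[1][0] / det, sw[0][0] / det]]
    diff = [ma[0] - mb[0], ma[1] - mb[1]]
    return [inv[0][0] * diff[0] + inv[0][1] * diff[1],
            inv[1][0] * diff[0] + inv[1][1] * diff[1]]

# Invented features per man: (alcohol units/week, cigarettes/day).
hyper = [(12, 10), (15, 20), (9, 15), (14, 12)]   # hypertensive group
normo = [(2, 0), (4, 2), (1, 1), (3, 0)]          # normotensive group
w = fisher_direction(hyper, normo)

def score(x):
    return w[0] * x[0] + w[1] * x[1]

print(score((13, 14)) > score((2, 1)))  # heavy consumer scores higher
```

Projecting a man's consumption profile onto w gives a discriminant score whose sign or threshold classifies him, which is the mechanism behind the abstract's claim that alcohol and tobacco discriminate hypertension status.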
Procedia PDF Downloads 147
30429 Socioeconomic Factors Associated with the Knowledge, Attitude, and Practices of Oil Palm Smallholders toward Ganoderma Disease
Authors: K. Assis, B. Bonaventure, A. Abdul Rahim, H. Affendy, A. Mohammad Amizi
Abstract:
Oil palm smallholders are very important producers of oil palm in Malaysia. They are categorized into two groups: organized smallholders and independent smallholders. In this study, 1,000 oil palm smallholders were interviewed using a structured questionnaire. The main objective of the survey was to identify the relationship between the socioeconomic characteristics of smallholders and their knowledge, attitude, and practices toward Ganoderma disease. The study locations covered Peninsular Malaysia and Sabah. Three important aspects were studied, namely knowledge of Ganoderma disease, attitude towards the disease, and practices in managing the disease. Cluster analysis, factor analysis, and binary logistic regression were used to analyze the data collected. The findings of the study should provide baseline data which can be used by the relevant agencies to conduct programs or to formulate a suitable development plan to improve the knowledge, attitude and practices of oil palm smallholders in managing Ganoderma disease.
Keywords: attitude, Ganoderma, knowledge, oil palm, practices, smallholders
Procedia PDF Downloads 398
30428 Computational Modelling of pH-Responsive Nanovalves in Controlled-Release System
Authors: Tomilola J. Ajayi
Abstract:
A category of nanovalve system containing an α-cyclodextrin (α-CD) ring on a stalk tethered to the pores of mesoporous silica nanoparticles (MSN) is theoretically and computationally modelled. The nanovalve controls the opening and blocking of the MSN pores for an efficient targeted drug release system. Modelling of the nanovalves is based on the interaction between α-CD and the stalk (p-anisidine) as a function of pH. Conformational analysis was carried out prior to the formation of the inclusion complex to find the global minimum of both the neutral and the protonated stalk. The B3LYP/6-311G(d,p) level of theory was employed to obtain all theoretically possible conformers of the stalk. Six conformers were taken into consideration, and the dihedral angle (θ) around the reference atom (N17) of the p-anisidine stalk was scanned from 0° to 360° at 5° intervals. The most stable conformer was obtained at a dihedral angle of 85.3° and was fully optimized at the B3LYP/6-311G(d,p) level. This most stable conformer was used as the starting structure to create the inclusion complexes. Nine complexes were formed by moving the neutral guest into the α-CD cavity along the Z-axis in 1 Å steps while keeping the distance between the dummy atom and the OMe oxygen atom on the stalk restricted; the dummy atom and the carbon atoms of the α-CD structure were equally restricted for orientation A (see Scheme 1). The structures generated at each step were optimized with the B3LYP/6-311G(d,p) method to determine their energy minima. Protonation of the nitrogen atom on the stalk occurs at acidic pH, leading to an unsatisfactory host-guest interaction in the nanogate and hence to dethreading. A high required interaction energy and a conformational change are theoretically established to drive the release of α-CD at a certain pH; the release was found to occur between pH 5 and 7, in agreement with reported experimental results. In this study, we applied the theoretical model to the prediction of the experimentally observed pH-responsive nanovalves, which enable blocking and opening of mesoporous silica nanoparticle pores for a targeted drug release system. Our results show that two major factors are responsible for cargo release at acidic pH: the higher interaction energy needed for the complex/nanovalve to persist after protonation, and the conformational change upon protonation, together drive the release upon a slight pH change between 5 and 7.
Keywords: nanovalves, nanogate, mesoporous silica nanoparticles, cargo
Procedia PDF Downloads 12330427 Adaptive Neuro Fuzzy Inference System Model Based on Support Vector Regression for Stock Time Series Forecasting
Authors: Anita Setianingrum, Oki S. Jaya, Zuherman Rustam
Abstract:
Forecasting stock prices is a challenging task due to the complex time series of the data. The complexity arises from the many variables that affect the stock market. Many time series models have been proposed before, but those models have some problems: 1) they depend on subjective choices of technical indicators, and 2) they rely on assumptions about the variables, so they cannot be applied to all datasets. Therefore, this paper studies a novel Adaptive Neuro-Fuzzy Inference System (ANFIS) time series model based on Support Vector Regression (SVR) for forecasting the stock market. To evaluate the performance of the proposed models, stock market transaction data of the TAIEX and HSI from January to December 2015 were collected as experimental datasets. The proposed method outperformed its counterparts in terms of accuracy.Keywords: ANFIS, fuzzy time series, stock forecasting, SVR
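A minimal sketch of the windowed one-step-ahead setup such models use: ordinary least squares stands in for the SVR-based ANFIS (the real model is nonlinear and fuzzy-rule based), and a synthetic series stands in for the index data; the lag length and series parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-in for a daily closing-price series (e.g. an index in 2015)
t = np.arange(300)
price = 100 + 0.05 * t + 2 * np.sin(t / 10) + rng.normal(0, 0.3, t.size)

def make_windows(series, lag):
    """Lagged feature matrix: predict series[i] from the previous `lag` values."""
    X = np.column_stack([series[i:len(series) - lag + i] for i in range(lag)])
    y = series[lag:]
    return X, y

lag = 5
X, y = make_windows(price, lag)
split = 250
Xtr, ytr, Xte, yte = X[:split], y[:split], X[split:], y[split:]

# Ordinary least squares with intercept (a linear stand-in for SVR/ANFIS)
A = np.column_stack([np.ones(len(Xtr)), Xtr])
coef, *_ = np.linalg.lstsq(A, ytr, rcond=None)
pred = np.column_stack([np.ones(len(Xte)), Xte]) @ coef

rmse = float(np.sqrt(np.mean((pred - yte) ** 2)))
print(f"one-step-ahead RMSE: {rmse:.3f}")
```

Swapping the least-squares step for an SVR (and layering fuzzy membership functions on the window features) recovers the structure the paper describes.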
Procedia PDF Downloads 24730426 Comparative Study of the Effects of Process Parameters on the Yield of Oil from Melon Seed (Cococynthis citrullus) and Coconut Fruit (Cocos nucifera)
Authors: Ndidi F. Amulu, Patrick E. Amulu, Gordian O. Mbah, Callistus N. Ude
Abstract:
Comparative analysis of the properties of melon seed, coconut fruit and their oil yields was carried out in this work using standard AOAC analytical techniques. The results revealed moisture contents of 11.15% (melon) and 7.59% (coconut), and crude lipid contents of 46.10% (melon) and 55.15% (coconut). The treatment combinations used (leaching time, leaching temperature and solute:solvent ratio) showed a significant difference (p < 0.05) in yield between the samples, with melon seed flour giving a higher range of oil yield (41.30 – 52.90%) than coconut (36.25 – 49.83%). Physical characterization of the extracted oils was also carried out: the refractive indices obtained are 1.487 (melon seed oil) and 1.361 (coconut oil), and the viscosities are 0.008 (melon seed oil) and 0.002 (coconut oil). Chemical analysis of the extracted oils shows acid values of 1.00 mg NaOH/g oil (melon oil) and 10.050 mg NaOH/g oil (coconut oil), and saponification values of 187.00 mg KOH/g (melon oil) and 183.26 mg KOH/g (coconut oil). The iodine values are 75.00 mg I2/g (melon oil) and 81.00 mg I2/g (coconut oil). The standard statistical package Minitab version 16.0 was used for the regression analysis and analysis of variance (ANOVA), and also to optimize the leaching process. Both samples gave their highest oil yields at the same optimal conditions: a solute-solvent ratio of 40 g/ml, a leaching time of 2 hours and a leaching temperature of 50 °C, yielding ≥ 52% (melon seed) and ≥ 48% (coconut). Both samples studied have potential for oil production, with melon seed giving the higher yield.Keywords: Coconut, Melon, Optimization, Processing
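The regression-based optimization step can be sketched as follows. The coefficients and simulated "experiments" below are illustrative assumptions, not the Minitab results; the point is that for a monotone first-order yield model the fitted optimum falls at the upper factor bounds, matching the reported 40 g/ml, 2 hours, 50 °C conditions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical first-order yield model, standing in for the fitted
# Minitab regression: x1 = ratio (g/ml), x2 = time (h), x3 = temp (C)
def true_yield(x1, x2, x3):
    return 20.0 + 0.55 * x1 + 3.0 * x2 + 0.12 * x3

# Simulated leaching experiments with measurement noise
n = 40
x1 = rng.uniform(20, 40, n)   # solute:solvent ratio
x2 = rng.uniform(1, 2, n)     # leaching time, hours
x3 = rng.uniform(30, 50, n)   # leaching temperature, C
y = true_yield(x1, x2, x3) + rng.normal(0, 0.5, n)

# Least-squares fit of the linear model
A = np.column_stack([np.ones(n), x1, x2, x3])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# For a monotone linear model the predicted optimum lies at the upper
# bounds of the factor ranges (40 g/ml, 2 h, 50 C)
opt = float(coef @ np.array([1.0, 40.0, 2.0, 50.0]))
print(f"predicted yield at optimum: {opt:.1f}%")
```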
Procedia PDF Downloads 44230425 Self-Efficacy, Self-Knowledge, Empathy and Psychological Well-Being as Predictors of Workers’ Job Performance in Food and Beverage Industries in the South-West, Nigeria
Authors: Michael Ayodeji Boyede
Abstract:
Studies have shown that workers' job performance is very low in Nigeria, especially in the food and beverage industry. This trend has been partially attributed to low workers' self-efficacy, poor self-knowledge, lack of empathy and poor psychological well-being. The descriptive survey design was adopted. Four factories were purposively selected from three states in Southwestern Nigeria (Lagos, Ogun and Oyo States). Proportionate random sampling was used to select 1,820 junior and supervisory cadre workers from Nestle Plc (369), Coca-Cola Plc (392), Cadbury Plc (443) and Nigeria Breweries (616). The five research instruments used were the workers' self-efficacy (r=0.81), self-knowledge (r=0.78), empathy (r=0.74), psychological well-being (r=0.70) and performance rating (r=0.72) scales. Quantitative data were analysed using Pearson product-moment correlation and multiple regression at the 0.05 level of significance. Findings show significant relationships between workers' job performance and self-efficacy (r=.56), self-knowledge (r=.54), empathy (r=.55) and psychological well-being (r=.69). Self-efficacy, self-knowledge, empathy and psychological well-being jointly predicted workers' job performance (F(4,1815) = 491.05), accounting for 52.0% of its variance. Psychological well-being (B=.52), self-efficacy (B=.10), self-knowledge (B=.11) and empathy (B=.09) had relative predictive weights on workers' job performance. Inadequate knowledge and training of supervisors led to a mismatch of workers, thereby reducing workers' job performance. High self-efficacy, empathy, psychological well-being and good self-knowledge influence workers' job performance in the food and beverage industry. 
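A minimal sketch of the joint multiple-regression step: synthetic standardized scales stand in for the survey instruments, with weights chosen to echo the reported betas; the R² computation itself is the standard one used to report variance explained.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1820  # sample size matching the surveyed workers

# Synthetic standardized predictors standing in for the survey scales
well_being = rng.normal(size=n)
self_eff   = rng.normal(size=n)
self_know  = rng.normal(size=n)
empathy    = rng.normal(size=n)

# Hypothetical weights echoing the reported betas (B = .52, .10, .11, .09)
performance = (0.52 * well_being + 0.10 * self_eff + 0.11 * self_know
               + 0.09 * empathy + rng.normal(0, 0.55, n))

# Multiple regression via least squares, then R-squared
X = np.column_stack([np.ones(n), well_being, self_eff, self_know, empathy])
beta, *_ = np.linalg.lstsq(X, performance, rcond=None)
fitted = X @ beta
ss_res = np.sum((performance - fitted) ** 2)
ss_tot = np.sum((performance - performance.mean()) ** 2)
r2 = float(1 - ss_res / ss_tot)
print(f"joint R^2 = {r2:.2f}")
```

With these assumed weights the joint R² comes out near 0.50, comparable in magnitude to the 52.0% variance reported in the abstract.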
Based on the findings, employers of labour should provide a work environment that would enhance and promote the development of these factors among workers.Keywords: self-efficacy, self-knowledge, empathy, psychological well-being, job performance
Procedia PDF Downloads 26230424 Correction Requirement to AISC Design Guide 31: Case Study of Web Post Buckling Design for Castellated Beams
Authors: Kitjapat Phuvoravan, Phattaraphong Ponsorn
Abstract:
In the design of castellated beams (CB), web post buckling caused by horizontal shear force is one of the important failure modes that must be considered. It is also the dominant governing mode when designing according to the recently published AISC Design Guide 31. However, the web post buckling equation given by the guide is still questionable to many engineers. The purpose of this paper is therefore to study the problem and propose an equation for web post buckling design that is simpler and more convenient to use. The study also addresses the inappropriate safety factor given by the guide. The proposed design equation was acquired by regression on the results of finite element analysis: a set of cellular beams was modelled using shell elements and analysed with both geometric and material nonlinearity. The results of the study show that the proposed equation is simpler and more precise for computing web post buckling in castellated beams than the equations provided in the guide.Keywords: castellated beam, web opening, web post buckling, design equation
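The regression step behind such a proposed equation can be sketched as below, with hypothetical FEA capacity ratios (the paper's shell-element results are not reproduced here) fitted by a second-order polynomial in one geometric parameter.

```python
import numpy as np

# Hypothetical FEA results: web-post buckling capacity ratio versus
# opening-depth ratio, standing in for the shell-element parametric study
depth_ratio = np.array([0.50, 0.55, 0.60, 0.65, 0.70, 0.75, 0.80])
capacity    = np.array([1.00, 0.93, 0.85, 0.76, 0.66, 0.55, 0.43])

# Second-order least-squares fit: the same regression idea used to
# condense many nonlinear FEA runs into one simple design equation
coeffs = np.polyfit(depth_ratio, capacity, 2)
fit = np.polyval(coeffs, depth_ratio)

rms_err = float(np.sqrt(np.mean((fit - capacity) ** 2)))
print(f"quadratic fit RMS error: {rms_err:.2e}")
```

A designer would then apply a safety factor to the fitted curve; the paper argues the guide's current factor is inappropriate, which is exactly the kind of adjustment this regression-based workflow makes easy to revisit.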
Procedia PDF Downloads 30230423 An Artificial Intelligence Framework to Forecast Air Quality
Authors: Richard Ren
Abstract:
Air pollution is a serious danger to international well-being and economies - it kills an estimated 7 million people every year and will cost world economies $2.6 trillion by 2060 due to sick days, healthcare costs, and reduced productivity. In the United States alone, 60,000 premature deaths are caused by poor air quality. For this reason, there is a crucial need to develop effective methods to forecast air quality, which can mitigate air pollution’s detrimental public health effects and associated costs by helping people plan ahead and avoid exposure. The goal of this study is to propose an artificial intelligence framework for predicting future air quality based on timing variables (i.e., season, weekday/weekend), future weather forecasts, and past pollutant and air quality measurements. The proposed framework utilizes multiple machine learning algorithms (logistic regression, random forest, neural network) with different specifications and averages the results of the three top-performing models to eliminate inaccuracies, weaknesses, and biases from any one individual model. Over time, the proposed framework uses new data to self-adjust model parameters and increase prediction accuracy. To demonstrate its applicability, a prototype of this framework was created to forecast air quality in Los Angeles, California using datasets from the RP4 weather data repository and EPA pollutant measurement data. The results showed good agreement between the framework’s predictions and real-life observations, with an overall 92% model accuracy. The combined model predicts more accurately than any of the individual models, and it reliably forecasts season-based variations in air quality levels. Top air quality predictor variables were identified through the measurement of mean decrease in accuracy. 
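The core ensemble idea, averaging the probability outputs of the top-performing models so that independent errors cancel, can be sketched with stand-in models. The real framework trains logistic regression, random forest, and neural network classifiers; here each is mocked as a predictor with a fixed accuracy, which is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5000
truth = rng.integers(0, 2, n)  # 1 = "unhealthy air" day, 0 = "good" day

def noisy_model(acc):
    """Stand-in for one trained model: its hard prediction is right with
    probability `acc` (errors independent per model), then softened
    into a probability with a little noise."""
    correct = rng.random(n) < acc
    pred = np.where(correct, truth, 1 - truth)
    return np.clip(pred + rng.normal(0, 0.15, n), 0, 1)

# Three stand-ins for logistic regression / random forest / neural net
p1, p2, p3 = noisy_model(0.85), noisy_model(0.85), noisy_model(0.85)

def accuracy(prob):
    return float(np.mean((prob > 0.5).astype(int) == truth))

ensemble = (p1 + p2 + p3) / 3  # average the three models' probabilities
print(f"single model: {accuracy(p1):.3f}  ensemble: {accuracy(ensemble):.3f}")
```

Because the mocked errors are independent, the averaged prediction is wrong only when at least two models miss, so the ensemble accuracy exceeds any single model's, the effect the framework relies on.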
This study proposed and demonstrated the efficacy of a comprehensive air quality prediction framework leveraging multiple machine learning algorithms to overcome individual algorithm shortcomings. Future enhancements should focus on expanding and testing a greater variety of modeling techniques within the proposed framework, testing the framework in different locations, and developing a platform to automatically publish future predictions in the form of a web or mobile application. Accurate predictions from this artificial intelligence framework can in turn be used to save and improve lives by allowing individuals to protect their health and allowing governments to implement effective pollution control measures.Keywords: air quality prediction, air pollution, artificial intelligence, machine learning algorithms
Procedia PDF Downloads 12730422 Prediction of Physical Properties and Sound Absorption Performance of Automotive Interior Materials
Authors: Un-Hwan Park, Jun-Hyeok Heo, In-Sung Lee, Seong-Jin Cho, Tae-Hyeon Oh, Dae-Kyu Park
Abstract:
The sound absorption coefficient is an important design consideration because noise affects the perceived emotional quality of a car. In practice it is tuned through extensive experiments in the field, because predicting it for multi-layer materials is unreliable. In this paper, we present the design of sound absorption for automotive interior materials with multiple layers, using software that estimates the sound absorption coefficient for a reverberation chamber. We also introduce a method for estimating the physical properties required to predict the sound absorption coefficient of car interior materials with multiple layers; the properties are calculated by an inverse algorithm, which is very economical because it yields information about physical properties without expensive equipment. A correlation test was carried out to ensure reliability and accuracy, using sound absorption coefficients measured in the reverberation chamber as the reference data. In this way, the design of automotive interior materials is economical and efficient, and design optimization for the sound absorption coefficient is also easy to implement.Keywords: sound absorption coefficient, optimization design, inverse algorithm, automotive interior material, multiple layers nonwoven, scaled reverberation chamber, sound impedance tubes
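The inverse step can be sketched with a one-parameter toy model: given coefficients "measured" in a reverberation chamber, search for the material property that best reproduces them. The exponential model and all property values are illustrative assumptions, not the actual multi-layer impedance model.

```python
import numpy as np

# Hypothetical one-parameter absorption model: a stand-in for the full
# multi-layer impedance model, with one unknown material property
def model_alpha(freq, prop):
    return 1.0 - np.exp(-prop * freq / 1.0e6)

freq = np.array([250.0, 500.0, 1000.0, 2000.0, 4000.0])  # Hz

# "Measured" reverberation-chamber coefficients generated from a known
# property value plus small errors, so the inverse step can be checked
true_prop = 900.0
measured = model_alpha(freq, true_prop) + np.array([0.01, -0.01, 0.0, 0.01, -0.01])

# Inverse algorithm: search for the property value that best reproduces
# the measured curve (least squares over a physical range)
candidates = np.linspace(100.0, 2000.0, 1901)
errors = [np.sum((model_alpha(freq, p) - measured) ** 2) for p in candidates]
estimated = float(candidates[int(np.argmin(errors))])
print(f"estimated property: {estimated:.0f} (true {true_prop:.0f})")
```

The real algorithm inverts several properties at once against full absorption curves, but the structure is the same: minimize the misfit between modeled and measured coefficients.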
Procedia PDF Downloads 30830421 Immobilization of Lipase Enzyme by Low Cost Material: A Statistical Approach
Authors: Md. Z. Alam, Devi R. Asih, Md. N. Salleh
Abstract:
Immobilization of lipase enzyme produced from palm oil mill effluent (POME) was optimized on activated carbon (AC), the most suitable among the low-cost support materials tested; an immobilization of 94% was achieved with AC. A sequential optimization strategy based on statistical experimental design, including the one-factor-at-a-time (OFAT) method, was used to determine the equilibrium time. Three factors influencing lipase immobilization were then optimized by response surface methodology (RSM) based on the face-centered central composite design (FCCCD). On statistical analysis of the results, the optimum enzyme loading, agitation rate and activated carbon dosage were found to be 30 U/ml, 300 rpm and 8 g/L respectively, with a maximum immobilization activity of 3732.9 U/g-AC after 2 hrs of immobilization. Analysis of variance (ANOVA) showed a high regression coefficient (R2 = 0.999), indicating a satisfactory fit of the model to the experimental data. The parameters were statistically significant at p<0.05.Keywords: activated carbon, POME based lipase, immobilization, adsorption
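The RSM step amounts to fitting a second-order model and locating its stationary point. A single-factor slice with hypothetical activity data (not the study's FCCCD runs) illustrates it:

```python
import numpy as np

# Hypothetical single-factor slice of the response surface:
# immobilization activity (U/g-AC) versus activated-carbon dosage (g/L)
dosage   = np.array([2.0, 4.0, 6.0, 8.0, 10.0, 12.0])
activity = np.array([2100.0, 2900.0, 3450.0, 3730.0, 3600.0, 3100.0])

# Second-order polynomial fit: the core of the RSM step
b2, b1, b0 = np.polyfit(dosage, activity, 2)
optimum_dosage = float(-b1 / (2 * b2))      # vertex of the fitted parabola
peak_activity = float(np.polyval([b2, b1, b0], optimum_dosage))
print(f"optimum near {optimum_dosage:.1f} g/L, activity {peak_activity:.0f} U/g-AC")
```

The full FCCCD fits the same kind of quadratic jointly in all three factors (loading, agitation, dosage) plus interaction terms, then solves for the stationary point of that surface.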
Procedia PDF Downloads 24330420 Simulation and Performance Evaluation of Transmission Lines with Shield Wire Segmentation against Atmospheric Discharges Using ATPDraw
Authors: Marcio S. da Silva, Jose Mauricio de B. Bezerra, Antonio E. de A. Nogueira
Abstract:
This paper presents a performance analysis of transmission line shield wires against atmospheric discharges when the option of segmenting the shield wire is taken, and verifies whether the resulting change in performance is tolerable. The goal of this work was complete modeling of a transmission line in the ATPDraw program, with the shield wire grounded at all towers and then at only some towers. The methodology was to choose an actual transmission line as a case study. After the choice of the line and verification of its full topology and materials, the line was completely modeled in ATPDraw. Several atmospheric discharges were then simulated by striking the grounded shield wires at each tower; these simulations identified the behavior of the existing line under atmospheric discharges. After this first analysis, the same line was re-evaluated with shield wire segmentation, a technique adopted on some transmission lines in Brazil to reduce induced losses in shield wires. Under the same atmospheric discharge conditions, the segmented line was evaluated again. The results showed that performance against atmospheric discharges similar to that of a line with shield wires grounded at multiple towers can be obtained with segmentation, provided some precautions are adopted: verification of the ground resistance of the segmented shield wire, adequacy of the maximum length of the segmented gap, and evaluation of the separation length of the spark-gap insulator electrodes, among others. In conclusion, provided a correct assessment is made and the correct adjustment criteria are adopted, a transmission line with shield wire segmentation can perform very similarly to the traditional arrangement with multiple earths. 
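One of the listed precautions, verifying the ground resistance of the shield wire, can be motivated with a heavily simplified back-flashover estimate. This ignores traveling-wave reflections, corona, and span coupling (all of which the full ATPDraw model captures), and every number is illustrative.

```python
# Simplified back-flashover check for a direct stroke to a grounded tower:
# the stroke current divides between the tower footing resistance and the
# two shield-wire spans (surge impedance z_shield each way). A coarse
# stand-in for the full ATPDraw simulation; all values are illustrative.

def tower_top_voltage_kv(i_stroke_ka, r_footing_ohm, z_shield_ohm=400.0):
    """Peak tower-top voltage ignoring reflections: stroke current times
    the parallel of the footing resistance and z_shield/2."""
    r_parallel = 1.0 / (1.0 / r_footing_ohm + 2.0 / z_shield_ohm)
    return i_stroke_ka * r_parallel  # kA x ohm = kV

cfo_kv = 650.0  # assumed insulator critical flashover voltage

for r in (10.0, 30.0, 80.0):
    v = tower_top_voltage_kv(30.0, r)
    flag = "back-flashover risk" if v > cfo_kv else "withstands"
    print(f"R_footing = {r:5.1f} ohm -> {v:7.1f} kV ({flag})")
```

Even this crude estimate shows why a poorly grounded segmented shield wire can degrade lightning performance: tower-top voltage scales almost linearly with footing resistance at high stroke currents.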
This solution contributes significantly to the reduction of energy losses in transmission lines.Keywords: atmospheric discharges, ATPDraw, shield wire, transmission lines
Procedia PDF Downloads 16930419 Comparison of Applicability of Time Series Forecasting Models VAR, ARCH and ARMA in Management Science: Study Based on Empirical Analysis of Time Series Techniques
Authors: Muhammad Tariq, Hammad Tahir, Fawwad Mahmood Butt
Abstract:
Purpose: This study attempts to identify the best forecasting methodology among the time series models VAR, ARCH and ARMA. Methodology: Benchmark parameters such as adjusted R-squared, F-statistics, the Durbin-Watson statistic, and the location of the roots were critically and empirically analyzed. The empirical analysis uses time series data on the Consumer Price Index and closing stock prices. Findings: The results show that the VAR model performed better than the other models; both its reliability and its significance are highly appreciable. In contrast, the ARCH model showed very poor forecasting results. The results of the ARMA model were mixed: the AR roots showed that the model is stationary, while the MA roots showed that it is invertible; forecasts made on the basis of the ARMA model would therefore remain doubtful. It is concluded that the VAR model provides the best forecasting results. Practical Implications: This paper provides empirical evidence for the application of time series forecasting models and therefore a basis for selecting the best one.Keywords: forecasting, time series, auto regression, ARCH, ARMA
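Two of the benchmarks used, adjusted R-squared and the Durbin-Watson statistic, plus a stationarity check on the AR root, can be computed for a simple AR(1) fit; the synthetic series below is a stand-in for the CPI/closing-price data.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic stand-in for a CPI / closing-price series: an AR(1) process
n = 400
phi_true = 0.7
y = np.zeros(n)
for t in range(1, n):
    y[t] = phi_true * y[t - 1] + rng.normal()

# Fit AR(1) by least squares: regress y[t] on y[t-1] with intercept
X = np.column_stack([np.ones(n - 1), y[:-1]])
beta, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
resid = y[1:] - X @ beta

# Benchmarks: adjusted R-squared and Durbin-Watson (near 2 => residuals
# show no serial correlation, i.e. the model captured the dynamics)
ss_res = np.sum(resid ** 2)
ss_tot = np.sum((y[1:] - y[1:].mean()) ** 2)
r2 = 1 - ss_res / ss_tot
adj_r2 = float(1 - (1 - r2) * (n - 2) / (n - 3))
dw = float(np.sum(np.diff(resid) ** 2) / ss_res)

# Root check: the AR root lies inside the unit circle iff |phi| < 1
phi_hat = float(beta[1])
print(f"phi = {phi_hat:.2f}, adj R^2 = {adj_r2:.2f}, DW = {dw:.2f}")
```

The same diagnostics extend to VAR (per-equation) and ARMA fits, which is how the paper compares the three model families on a common footing.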
Procedia PDF Downloads 34830418 Online Electric Current Based Diagnosis of Stator Faults on Squirrel Cage Induction Motors
Authors: Alejandro Paz Parra, Jose Luis Oslinger Gutierrez, Javier Olaya Ochoa
Abstract:
In the present paper, five electric-current-based methods for analyzing electrical faults on the stator of induction motors (IM) are used and compared. The analysis seeks to extend the application of the multiple reference frames diagnosis technique, and an eccentricity indicator is presented to improve the application of the Park's Vector Approach. Most of the fault indicators are validated and some others revised; they agree with the technical literature and published results. A three-phase 3 hp squirrel cage IM, specially modified to establish different fault levels, is used for validation purposes.Keywords: motor fault diagnosis, induction motor, MCSA, ESA, Extended Park's vector approach, multiparameter analysis
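The Park's Vector Approach mentioned above maps the three phase currents into an (id, iq) locus that traces a circle for a healthy, balanced machine and distorts into an ellipse under stator faults. A minimal sketch with ideal balanced currents (illustrative amplitudes, not measured data):

```python
import numpy as np

t = np.linspace(0, 0.1, 1000)   # 100 ms window
w = 2 * np.pi * 50              # 50 Hz supply

def park_vector(ia, ib, ic):
    """Classic Park transform used by the diagnosis technique."""
    i_d = np.sqrt(2 / 3) * ia - np.sqrt(1 / 6) * (ib + ic)
    i_q = np.sqrt(1 / 2) * (ib - ic)
    return i_d, i_q

# Balanced (healthy) three-phase currents, 10 A peak
ia = 10 * np.cos(w * t)
ib = 10 * np.cos(w * t - 2 * np.pi / 3)
ic = 10 * np.cos(w * t + 2 * np.pi / 3)
i_d, i_q = park_vector(ia, ib, ic)

# A circular locus has constant radius; a stator fault (phase unbalance)
# makes the radius oscillate, which an eccentricity indicator detects
radius = np.hypot(i_d, i_q)
eccentricity = float((radius.max() - radius.min()) / radius.mean())
print(f"healthy locus eccentricity: {eccentricity:.2e}")
```

Reducing one phase amplitude (e.g. `ia = 8 * np.cos(w * t)`) turns the circle into an ellipse and drives the eccentricity indicator well above zero, which is the behavior the fault indicators exploit.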
Procedia PDF Downloads 34830417 Virtual Reality and Avatars in Education
Authors: Michael Brazley
Abstract:
Virtual Reality (VR) and 3D videos are the most current generation of learning technology today. They are now used in professional offices and schools for marketing and education. Technology in the field of design has progressed from two-dimensional drawings to 3D models, using computers and sophisticated software. Virtual Reality is used as a collaborative means to allow designers and others to meet and communicate inside models or VR platforms using avatars. This research proposes to teach students from different backgrounds how to take a digital model into a 3D video, then into VR, and finally into VR with multiple avatars communicating with each other in real time. The next step would be to develop the model so that people from three or more different locations can meet as avatars in real time, in the same model, and talk to each other. This research is longitudinal, studying the use of 3D videos in graduate design courses and Virtual Reality in Extended Reality (XR) courses. The research methodology is a combination of quantitative and qualitative methods: the qualitative methods begin with the literature review and case studies, while the quantitative methods draw on students' 3D videos, a survey, and XR course work. The end product is a VR platform with multiple avatars able to communicate in real time. This research is important because it will allow multiple users to remotely enter a model or VR platform from any location in the world and effectively communicate in real time. It will lead to improved learning and training using Virtual Reality and avatars, and it is generalizable because most colleges, universities, and many citizens own VR equipment and computer labs. The research did produce a VR platform with multiple avatars having the ability to move and speak to each other in real time. 
Major implications of the research include, but are not limited to, improved learning, teaching, communication, marketing, designing and planning. Both hardware and software played a major role in the project's success.Keywords: virtual reality, avatars, education, XR
Procedia PDF Downloads 98