Search results for: scientific model
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 18050

12230 Computational Fluid Dynamics Investigation of the Effect of Geometric Parameters on the Ejector Performance

Authors: Michel Wakim, Rodrigo Rivera Tinoco

Abstract:

A supersonic ejector is an economical device that uses high-pressure vapor to compress a low-pressure vapor without any rotating parts or external power sources. The entrainment ratio is a major characteristic of ejector performance, and that performance is highly dependent on the ejector's geometry. The aim of this paper is to design the ejector geometry, based on pre-specified operating conditions, and to study the flow behavior inside the ejector using computational fluid dynamics (CFD) with the ANSYS FLUENT 15.0 software. In the first section, a 1-D mathematical model is developed to predict the ejector geometry. The second part describes the flow behavior inside the designed model. CFD is the most reliable tool for revealing the mixing process in different parts of the supersonic turbulent flow and for studying the effect of the geometry on the effective ejector area. Finally, the results show the effect of the geometry on the entrainment ratio.
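
For reference, the entrainment ratio the abstract treats as the key performance characteristic is conventionally the ratio of the secondary (entrained) mass flow to the primary (motive) mass flow; a minimal sketch (function and variable names are illustrative, not from the paper):

```python
def entrainment_ratio(m_dot_secondary, m_dot_primary):
    """Entrainment ratio = entrained (suction) mass flow / motive (primary)
    mass flow; higher values mean better ejector performance."""
    if m_dot_primary <= 0:
        raise ValueError("primary mass flow must be positive")
    return m_dot_secondary / m_dot_primary

# Illustrative: 0.02 kg/s of vapor entrained by 0.05 kg/s of motive vapor
omega = entrainment_ratio(0.02, 0.05)
print(omega)  # 0.4
```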

Keywords: computational fluid dynamics, ejector, entrainment ratio, geometry optimization, performance

Procedia PDF Downloads 260
12229 Cross Analysis of Gender Discrimination in Print Media of Subcontinent via James Paul Gee Model

Authors: Luqman Shah

Abstract:

Gender discrimination is now a well-documented and widely recognized fact. However, gender is only one facet of an individual's multiple identities. The aim of this work is to investigate gender discrimination as highlighted in the print media of the subcontinent, with a specific focus on Pakistan and India. In this study, the James Paul Gee model is adopted for the identification of gender discrimination. Gender discrimination is not consistent in its nature and intensity across global societies; it varies with social, geographical, and cultural background. The world has changed enormously in every aspect of life, and there are obvious shifts in attitudes towards gender discrimination, prejudices, and biases, but the world still has a long way to go before women are recognized as men's equals in every sphere of life. The history of the world is full of gender-based incidents and violence. The time has come for this issue to be seriously addressed and for this evil to be eradicated, which would help harmonize society and, consequently, lead towards peace and prosperity. The study was carried out using a mixed-model research method. The data were extracted from the contents of five Pakistani daily English newspapers (out of a total of 23) and, likewise, five Indian daily English newspapers (out of 52) published in 2018-2019. Two news stories from each of these newspapers, twenty in total, were taken as the sample for this research. Content and semiotic analysis techniques were applied through James Paul Gee's seven building tasks of language. The resources of renowned e-papers were utilized, and cases of Indian gender-based stories highlighted in Pakistani newspapers, and vice versa, were scrutinized as required for this research paper.
For the analysis of the written stretches of discourse taken from the e-papers and the processing of data for the focused problem, James Paul Gee's 'Seven Building Tasks of Language' was used. The findings were tabulated to pinpoint the issue with certainty. After processing the data, the findings showed gross human rights violations on the basis of gender discrimination. The print media needs a more realistic representation of what is, not what seems to be. The study recommends equality and parity between the genders.

Keywords: gender discrimination, print media, Paul Gee model, subcontinent

Procedia PDF Downloads 201
12228 Structure of Tourists’ Shopping Behavior: From the Tyranny of Hotels to Public Markets

Authors: Asmaa M. Marzouk, Abdallah M. Elshaer

Abstract:

Despite the well-recognized value of shopping as a revenue-generating resource, little effort has been made to investigate the structure of tourists' shopping behavior, which, in turn, affects their travel experience. The purpose of this paper is to study the structure of the tourists' shopping process in order to better understand their shopping behavior, by investigating the factors that influence this activity beyond the tyranny of hotels. This study specifically aims to propose a model incorporating all of those variables. This empirical study investigates the shopping experience of international tourists using a questionnaire administered to a multinational sample selected from the tourist population visiting a specific destination in Egypt. The study highlights the various stakeholders that lead tourists to shop independently of hotels. The results, therefore, demonstrate the relationships among the entities involved in the shopping process and configure the variables within the model in a way that provides a viable solution for visitors to avoid the tyranny of hotel facilities and amenities over the public markets.

Keywords: hotels’ amenities, shopping process, tourist behavior, tourist satisfaction

Procedia PDF Downloads 115
12227 Regression Model Evaluation on Depth Camera Data for Gaze Estimation

Authors: James Purnama, Riri Fitri Sari

Abstract:

We investigate the machine learning algorithm selection problem for depth-image-based eye gaze estimation, with respect to the essential difficulty of reducing the number of required training samples and the training time. Statistics-based prediction accuracy measures are increasingly used to assess and evaluate prediction or estimation in gaze estimation. This article uses Root Mean Squared Error (RMSE) and R-squared statistical analysis to assess machine learning methods on depth camera data for gaze estimation. Four machine learning methods were evaluated: Random Forest Regression, Regression Tree, Support Vector Machine (SVM), and Linear Regression. The experimental results show that Random Forest Regression has the lowest RMSE and the highest R-squared, making it the best of the methods compared.
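
The two evaluation statistics named in the abstract can be computed directly; a minimal NumPy sketch (the data here are illustrative, not the study's):

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root Mean Squared Error: lower is better."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def r_squared(y_true, y_pred):
    """Coefficient of determination: fraction of variance explained (1.0 is perfect)."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return float(1.0 - ss_res / ss_tot)

# Illustrative gaze-angle targets vs. a regressor's predictions
y_true = [1.0, 2.0, 3.0, 4.0]
y_pred = [1.1, 1.9, 3.2, 3.8]
print(rmse(y_true, y_pred), r_squared(y_true, y_pred))
```

Ranking candidate regressors then amounts to preferring the one with the lowest RMSE and highest R-squared on held-out data.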

Keywords: gaze estimation, gaze tracking, eye tracking, kinect, regression model, orange python

Procedia PDF Downloads 523
12226 Numerical Approach of RC Structural Members Exposed to Fire and After-Cooling Analysis

Authors: Ju-young Hwang, Hyo-Gyoung Kwak, Hong Jae Yim

Abstract:

This paper introduces a numerical analysis method for reinforced-concrete (RC) structures exposed to fire and compares the results with experimental results. The proposed analysis method for RC structures under high temperature consists of two procedures. The first step is to determine the temperature distribution across the section through a heat transfer analysis using the time-temperature curve. After determination of the temperature distribution, a nonlinear analysis follows. By considering material and geometrical non-linearity together with the temperature distribution, the nonlinear analysis predicts the behavior of the RC structure as a function of fire exposure time. The proposed method is validated by comparison with experimental results. Finally, a prediction model to describe the state of after-cooling concrete is also introduced, based on the results of an additional experiment. The product of this study is expected to be embedded in smart structural monitoring systems against fire in the u-City.
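
The first step described, determining the temperature distribution across the section, can be sketched as a 1-D explicit finite-difference heat transfer analysis; a minimal illustration under assumed properties and boundary handling (a fixed exposed-face temperature stands in for the fire time-temperature curve; this is not the paper's model):

```python
import numpy as np

def heat_1d(T0, T_surface, alpha, dx, dt, steps, nodes=11):
    """Explicit FTCS scheme for 1-D transient conduction across a section.

    T0: initial uniform temperature (degC); T_surface: exposed-face temperature
    (a fixed value standing in for the time-temperature curve);
    alpha: thermal diffusivity (m^2/s). Stable only if alpha*dt/dx**2 <= 0.5.
    """
    r = alpha * dt / dx ** 2
    assert r <= 0.5, "explicit scheme unstable for this dx/dt"
    T = np.full(nodes, float(T0))
    for _ in range(steps):
        T[0] = T_surface                                   # fire-exposed face
        T[1:-1] = T[1:-1] + r * (T[2:] - 2 * T[1:-1] + T[:-2])
        T[-1] = T[-2]                                      # insulated far face
    return T

# Illustrative: concrete-like diffusivity, 0.1 m section, ~2.8 h exposure
profile = heat_1d(T0=20.0, T_surface=800.0, alpha=1e-6, dx=0.01, dt=10.0, steps=1000)
```

The resulting nodal temperatures would then feed the nonlinear structural analysis as degraded material properties per node.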

Keywords: RC structures, heat transfer analysis, nonlinear analysis, after-cooling concrete model

Procedia PDF Downloads 351
12225 A Review on Medical Image Registration Techniques

Authors: Shadrack Mambo, Karim Djouani, Yskandar Hamam, Barend van Wyk, Patrick Siarry

Abstract:

This paper discusses current trends in medical image registration techniques and addresses the need to provide a solid theoretical foundation for research endeavours in this field. A methodological analysis and synthesis of quality literature was carried out, providing a platform for developing a sound foundation for research, which is crucial to understanding the existing levels of knowledge. Research on medical image registration techniques assists clinical and medical practitioners in the diagnosis of tumours and lesions in anatomical organs, thereby enabling fast and accurate curative treatment of patients. Out of these considerations, the aim of this paper is to enhance the scientific community's understanding of the current status of research in medical image registration techniques and to communicate the contribution of this research to the field of image processing. The gaps identified in current techniques can be closed by the use of artificial neural networks, which form learning systems designed to minimise an error function. The paper also suggests several areas of future research in image registration.

Keywords: image registration techniques, medical images, neural networks, optimisation, transformation

Procedia PDF Downloads 161
12224 On Energy Condition Violation for Shifting Negative Mass Black Holes

Authors: Manuel Urueña Palomo

Abstract:

In this paper, we introduce the study of a new solution to gravitational singularities obtained by violating the energy conditions of the Penrose-Hawking singularity theorems. We consider that a shift to negative energies, and thus to negative masses, takes place at the event horizon of a black hole, justified by the original, singular, and exact Schwarzschild solution. These negative energies are supported by relativistic particle physics through the negative energy solutions of the Dirac equation, which state that a time transformation shifts a particle to negative energy. In both general relativity and full Newtonian mechanics, these negative masses are predicted to be repulsive. It is demonstrated that the model fits actual observations and could possibly explain the sizes of observed and otherwise unexplained supermassive black holes, when considering the inflation that would take place inside the event horizon where massive particles interact antigravitationally. An approximate solution of the proposed model could be simulated in order to compare it with these observations.

Keywords: black holes, CPT symmetry, negative mass, time transformation

Procedia PDF Downloads 130
12223 Thermodynamics of Stable Micro Black Holes Production by Modeling from the LHC

Authors: Aref Yazdani, Ali Tofighi

Abstract:

We study a simulative model for the production of stable micro black holes based on an investigation of the thermodynamics of the LHC experiment. We show how this production can be achieved through a thermodynamic process of stabilization, driven by a very small amount of powerful fuel. By applying the second law of black hole thermodynamics at the scale of quantum gravity and a perturbation expansion of the given entropy function, a time-dependent potential function is obtained, which is illustrated with exact numerical values in higher dimensions. Determining the conditions for the stability of micro black holes is another purpose of this study. This is demonstrated through an injection method: putting an exact amount of energy into the final phase of the production, equivalent to injecting the same energy into the center of collision at the LHC in order to stabilize the produced particles. Injecting energy into the center of collision at the LHC is a new approach that is worth attempting for the first time.

Keywords: micro black holes, LHC experiment, black hole thermodynamics, extra dimensions model

Procedia PDF Downloads 129
12222 The Validation and Reliability of the Arabic Effort-Reward Imbalance Model Questionnaire: A Cross-Sectional Study among University Students in Jordan

Authors: Mahmoud M. AbuAlSamen, Tamam El-Elimat

Abstract:

Amid the economic crisis in Jordan, the Jordanian government has opted for a knowledge economy in which education is promoted as a means of economic development. University education usually comes at the expense of study-related stress that may adversely impact the health of students. Since stress is a latent variable that is difficult to measure, a valid tool should be used to do so. The effort-reward imbalance (ERI) model is used as a measurement tool for occupational stress. The model is built on the notion of reciprocity, which relates 'effort' to 'reward' through the mediating 'over-commitment'. Reciprocity assumes equilibrium between effort and reward, where 'high' effort is adequately compensated with 'high' reward. When this equilibrium is violated (i.e., high effort with low reward), negative emotions and stress may be elicited, which have been correlated with adverse health conditions. The ERI theory has been established in many different parts of the world, and associations with chronic diseases and the health of workers have been explored at length. While much of the effort-reward imbalance research has investigated work conditions, there has been growing interest in the validity of the ERI model when applied to other social settings such as schools and universities. The ERI questionnaire was recently developed in Arabic to measure ERI among high school teachers. However, little information is available on the validity of the ERI questionnaire in university students. A cross-sectional study was conducted on 833 students in Jordan to measure the validity and reliability of the Arabic ERI questionnaire among university students. Reliability, as measured by Cronbach's alpha of the effort, reward, and overcommitment scales, was 0.73, 0.76, and 0.69, respectively, suggesting satisfactory reliability. The factorial structure was explored using principal axis factoring.
The results fitted a five-factor solution in which both effort and overcommitment were uni-dimensional, while the reward scale was three-dimensional with factors named 'support', 'esteem', and 'security'. The solution explained 56% of the variance in the data. The established ERI theory was replicated with excellent validity in this study. The effort-reward ratio in university students was 1.19, which suggests a slight degree of failed reciprocity. The study also investigated the association of effort, reward, overcommitment, and ERI with participants' demographic factors and self-reported health. ERI was found to be significantly associated with absenteeism (p < 0.0001), a history of failed courses (p = 0.03), and poor academic performance (p < 0.001). Moreover, ERI was found to be associated with poor self-reported health among university students (p = 0.01). In conclusion, the Arabic ERI questionnaire is reliable and valid for measuring effort-reward imbalance in university students in Jordan. The results of this research are important in informing higher education policy in Jordan.
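
For reference, the effort-reward ratio reported above is conventionally computed, following Siegrist's formulation, as the effort sum score divided by the reward sum score times a correction factor for unequal item counts; a sketch with illustrative scores and item counts (not the study's instrument lengths):

```python
def eri_ratio(effort_score, reward_score, n_effort_items, n_reward_items):
    """Effort-reward imbalance ratio, ERI = e / (r * c), where the correction
    factor c = n_effort_items / n_reward_items adjusts for unequal scale
    lengths. Values above 1 indicate failed reciprocity (high effort, low reward)."""
    c = n_effort_items / n_reward_items
    return effort_score / (reward_score * c)

# Illustrative: 6 effort items summing to 18, 11 reward items summing to 33
ratio = eri_ratio(effort_score=18, reward_score=33, n_effort_items=6, n_reward_items=11)
print(round(ratio, 2))  # 1.0 (balanced effort and reward)
```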

Keywords: effort-reward imbalance, factor analysis, validity, self-reported health

Procedia PDF Downloads 100
12221 Why and When to Teach Definitions: Necessary and Unnecessary Discontinuities Resulting from the Definition of Mathematical Concepts

Authors: Josephine Shamash, Stuart Smith

Abstract:

We examine the reasons for introducing definitions in the teaching of mathematics in a number of different cases. We try to determine if, where, and when to provide a definition, and which definition to choose. We characterize different types of definitions and the different purposes we may have for formulating them, and detail examples of each type. Giving a definition at a certain stage can sometimes be detrimental to the development of the concept image. In such a case, it is advisable to delay the precise definition to a later stage. We describe two models, the 'successive approximation model' and the 'model of the extending definition', that fit such situations. Detailed examples that fit the different models are given, based on material taken from a number of textbooks, together with an analysis of the way each concept is introduced and where and how its definition is given. Our conclusion, based on this analysis, is that some of the definitions given may cause discontinuities in the learning sequence and constitute obstacles and unnecessary cognitive conflicts in the formation of the concept definition. However, in other cases, the discontinuity in passing from definition to definition actually serves a didactic purpose, is unavoidable for the mathematical evolution of the concept image, and is essential for students to deepen their understanding.

Keywords: concept image, mathematical definitions, mathematics education, mathematics teaching

Procedia PDF Downloads 113
12220 Weighted Data Replication Strategy for Data Grid Considering Economic Approach

Authors: N. Mansouri, A. Asadi

Abstract:

A Data Grid is a geographically distributed environment that deals with data-intensive applications in scientific and enterprise computing. Data replication is a common method used to achieve efficient and fault-tolerant data access in Grids. In this paper, a dynamic data replication strategy, called Enhanced Latest Access Largest Weight (ELALW), is proposed. This strategy is an enhanced version of the Latest Access Largest Weight strategy. However, replication should be used wisely because the storage capacity of each Grid site is limited; thus, it is important to design an effective strategy for the replica replacement task. ELALW replaces replicas based on the expected number of future requests, the size of the replica, and the number of copies of the file. It also improves access latency by selecting the best replica when various sites hold replicas. The proposed replica selection chooses the best replica location from among the many replicas based on response time, which can be determined by considering the data transfer time, the storage access latency, the replica requests waiting in the storage queue, and the distance between nodes. Simulation results using OptorSim show that our replication strategy achieves better overall performance than other strategies in terms of job execution time, effective network usage, and storage resource usage.
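
The response-time-based replica selection described above can be sketched as follows; the weighting of transfer time, latency, queue wait, and distance is illustrative, not ELALW's exact formula:

```python
def response_time(replica):
    """Estimate response time from transfer time (scaled by inter-node
    distance), storage access latency, and queue wait (illustrative weighting)."""
    transfer = replica["size_mb"] / replica["bandwidth_mbps"] * replica["distance"]
    queue_wait = replica["queued_requests"] * replica["storage_latency"]
    return transfer + replica["storage_latency"] + queue_wait

def best_replica(replicas):
    """Select the replica location with the lowest estimated response time."""
    return min(replicas, key=response_time)

# Illustrative sites: a nearby slow link vs. a distant fast one
near_slow = {"name": "site_a", "size_mb": 100, "bandwidth_mbps": 100,
             "distance": 1, "storage_latency": 0.01, "queued_requests": 0}
far_fast = {"name": "site_b", "size_mb": 100, "bandwidth_mbps": 1000,
            "distance": 5, "storage_latency": 0.01, "queued_requests": 10}
print(best_replica([near_slow, far_fast])["name"])  # site_b
```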

Keywords: data grid, data replication, simulation, replica selection, replica placement

Procedia PDF Downloads 245
12219 Neuro-Fuzzy Based Model for Phrase Level Emotion Understanding

Authors: Vadivel Ayyasamy

Abstract:

The present approach deals with the identification of emotions and the classification of emotional patterns at the phrase level with respect to positive and negative orientation. The proposed approach considers emotion-triggering terms, their co-occurring terms, and also associated sentences for recognizing emotions. The approach uses Part-of-Speech tagging and Emotion Actifiers for classification. Sentence patterns are broken into phrases, and a Neuro-Fuzzy model is used for classification, which results in 16 patterns of emotional phrases. Suitable intensities are assigned to capture the degree of emotional content present in the semantics of the patterns. These emotional phrases are assigned weights, which support deciding the positive or negative orientation of emotions. The approach uses web documents for experimental purposes; the proposed classification approach performs well and achieves good F-scores.

Keywords: emotions, sentences, phrases, classification, patterns, fuzzy, positive orientation, negative orientation

Procedia PDF Downloads 360
12218 Big Data Analytics and Data Security in the Cloud via Fully Homomorphic Encryption Scheme

Authors: Victor Onomza Waziri, John K. Alhassan, Idris Ismaila, Noel Dogonyara

Abstract:

This paper addresses the problem of building secure computational services for encrypted information in the Cloud, that is, computing without decrypting the encrypted data. This meets the aspiration of a computational encryption model that could enhance the security of big data in terms of privacy, confidentiality, availability, integrity, and user security. The cryptographic model applied for the computational processing of the encrypted data is the Fully Homomorphic Encryption Scheme. We contribute a theoretical presentation of high-level computational processes, based on number theory derivable from abstract algebra, that can easily be integrated and leveraged in the Cloud computing interface, with detailed theoretic mathematical concepts for fully homomorphic encryption models. This contribution advances the full implementation of big data analytics based on cryptographically secure algorithms.
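
Full FHE is far beyond a snippet, but the underlying homomorphic property can be illustrated with textbook RSA, which is homomorphic under multiplication only (a partially homomorphic scheme; FHE extends computation on ciphertexts to both addition and multiplication). Toy, insecure parameters:

```python
# Toy textbook-RSA illustration of a homomorphic property (insecure parameters).
p, q = 61, 53
n = p * q                           # modulus 3233
e = 17                              # public exponent
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent (Python 3.8+ modular inverse)

def enc(m):
    return pow(m, e, n)

def dec(c):
    return pow(c, d, n)

m1, m2 = 7, 11
# Multiplying ciphertexts multiplies the underlying plaintexts (mod n),
# so a third party can compute on the data without ever decrypting it:
product_plain = dec(enc(m1) * enc(m2) % n)
print(product_plain)  # 77
```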

Keywords: big data analytics, security, privacy, bootstrapping, Fully Homomorphic Encryption Scheme

Procedia PDF Downloads 461
12217 Using Greywolf Optimized Machine Learning Algorithms to Improve Accuracy for Predicting Hospital Readmission for Diabetes

Authors: Vincent Liu

Abstract:

Machine learning (ML) algorithms can achieve high accuracy in predicting outcomes compared to classical models. Metaheuristic, nature-inspired algorithms can enhance traditional ML algorithms by optimizing them, for example by performing feature selection. We compare ten ML algorithms for predicting 30-day hospital readmission rates for diabetes patients in the US, using a dataset from the UCI Machine Learning Repository with feature selection performed by the nature-inspired Grey Wolf algorithm. The baseline accuracy for the initial random forest model was 65%. After feature engineering, SMOTE for class balancing, and Grey Wolf optimization, the machine learning algorithms showed better metrics, including F1 scores, accuracy, and confusion matrices, with improvements ranging from 10% to 30%, and a best model of XGBoost with an accuracy of 95%. Applying machine learning this way can improve patient outcomes, as unnecessary rehospitalizations can be prevented by focusing on patients who are at higher risk of readmission.

Keywords: diabetes, machine learning, 30-day readmission, metaheuristic

Procedia PDF Downloads 35
12216 Towards Automatic Calibration of In-Line Machine Processes

Authors: David F. Nettleton, Elodie Bugnicourt, Christian Wasiak, Alejandro Rosales

Abstract:

In this presentation, preliminary results are given for the modeling and calibration of two different industrial winding MIMO (Multiple Input Multiple Output) processes using machine learning techniques. In contrast to previous approaches, which have typically used 'black-box' linear statistical methods together with a definition of the mechanical behavior of the process, we use non-linear machine learning algorithms together with a 'white-box' rule induction technique to create a supervised model of the fitting error between the expected and real force measures. The final objective is to build a precise model of the winding process in order to control the tension of the material being wound in the first case, and the friction of the material passing through the die in the second case. Case 1, tension control of a winding process: a plastic web is unwound from a first reel, passes over a traction reel, and is rewound on a third reel. The objectives are (i) to train a model to predict the web tension and (ii) calibration, to find the input values which result in a given tension. Case 2, friction force control of a micro-pullwinding process: a core plus resin passes through a first die, two winding units then wind an outer layer around the core, and the material makes a final pass through a second die. The objectives are (i) to train a model to predict the friction on die 2 and (ii) calibration, to find the input values which result in a given friction on die 2. Different machine learning approaches were tested to build the models: Kernel Ridge Regression, Support Vector Regression (with a Radial Basis Function kernel), and MPART (rule induction with a continuous-valued output). As a preliminary step, the MPART rule induction algorithm was used to build an explicative model of the error (the difference between the expected and real friction on die 2). Modeling the error behavior with explicative rules helps improve the overall process model.
Once the models are built, the inputs are calibrated by generating Gaussian random numbers for each input (taking into account its mean and standard deviation) and comparing the output to a target (desired) output until the closest fit is found. The results of empirical testing show that high precision is obtained for both the trained models and the calibration process. The learning step is the slowest part of the process (at most 5 minutes for this data), but it can be done offline just once. The calibration step is much faster, obtaining in under one minute a precision error of less than 1e-3 for both outputs. To summarize, in the present work two processes have been modeled and calibrated. Fast processing times and high precision have been achieved, which can be further improved by using heuristics to guide the Gaussian calibration. Error behavior has been modeled to help improve the overall process understanding. This is relevant for the quick optimal set-up of the many different industrial processes that use a pull-winding type process to manufacture fibre-reinforced plastic parts. Acknowledgements to the Openmind project, which is funded by the Horizon 2020 European Union funding for Research & Innovation, Grant Agreement number 680820.
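
The Gaussian calibration loop described above can be sketched as a simple random search; the model and its parameters here are illustrative placeholders, not the trained winding models:

```python
import random

def calibrate(model, target, means, stds, trials=2000, seed=0):
    """Random search: sample each input from its Gaussian (mean, std) and
    keep the sample whose model output lies closest to the target output."""
    rng = random.Random(seed)
    best_x, best_err = None, float("inf")
    for _ in range(trials):
        x = [rng.gauss(m, s) for m, s in zip(means, stds)]
        err = abs(model(x) - target)
        if err < best_err:
            best_x, best_err = x, err
    return best_x, best_err

# Illustrative stand-in for a trained model: tension as a weighted input sum
tension_model = lambda x: 2.0 * x[0] + 0.5 * x[1]
inputs, err = calibrate(tension_model, target=10.0, means=[3.0, 8.0], stds=[1.0, 2.0])
print(err)  # small residual error after 2000 samples
```

The heuristics mentioned in the abstract would replace the blind sampling here, e.g. by re-centering the Gaussians on the best sample found so far.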

Keywords: data model, machine learning, industrial winding, calibration

Procedia PDF Downloads 224
12215 A Large Dataset Imputation Approach Applied to Country Conflict Prediction Data

Authors: Benjamin Leiby, Darryl Ahner

Abstract:

This study demonstrates an alternative stochastic imputation approach for large datasets, for cases when preferred commercial packages struggle to iterate due to numerical problems. A large country conflict dataset motivates the search to impute missing values at levels well above the common threshold of 20% missingness. The methodology capitalizes on correlation while using model residuals to provide the uncertainty in estimating unknown values. Examination of the methodology provides insight toward choosing linear or nonlinear modeling terms. The static tolerances common in most packages are replaced with tailorable tolerances that exploit residuals to fit each data element. The evaluation of the methodology includes observing computation time, model fit, and the comparison of known values to the replaced values created through imputation. Overall, the country conflict dataset shows the promise of modeling first-order interactions while indicating a need for further refinement that mimics predictive mean matching.
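
The core idea, regression imputation with resampled residuals supplying the uncertainty, can be sketched as follows (a linear model on synthetic data; the study's actual models and tailorable tolerances are richer):

```python
import numpy as np

def stochastic_impute(x, y, rng=None):
    """Fit a line on observed (x, y) pairs, then fill missing y values with
    the prediction plus a resampled observed residual, so imputed values
    carry the model's uncertainty instead of sitting exactly on the line."""
    rng = rng if rng is not None else np.random.default_rng(0)
    miss = np.isnan(y)
    slope, intercept = np.polyfit(x[~miss], y[~miss], 1)   # fit on observed pairs
    residuals = y[~miss] - (intercept + slope * x[~miss])  # observed model errors
    filled = y.copy()
    filled[miss] = intercept + slope * x[miss] + rng.choice(residuals, miss.sum())
    return filled

# Synthetic example with two missing responses
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, np.nan, 8.1, np.nan])
filled = stochastic_impute(x, y)
print(filled)
```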

Keywords: correlation, country conflict, imputation, stochastic regression

Procedia PDF Downloads 104
12214 A Simulated Evaluation of Model Predictive Control

Authors: Ahmed AlNouss, Salim Ahmed

Abstract:

Process control refers to the techniques used to control the variables in a process in order to maintain them at their desired values. Advanced process control (APC) is a broad term within the domain of control that covers different kinds of process control and control-related tools, for example, model predictive control (MPC), statistical process control (SPC), fault detection and classification (FDC), and performance assessment. APC is often used for solving multivariable control problems, and model predictive control (MPC) is one of only a few advanced control methods used successfully in industrial control applications. Advanced control is expected to bring many benefits to plant operation; however, the extent of the benefits is plant specific, and the application needs a large investment. This requires an analysis of the expected benefits before the implementation of the control. In a real plant, simulation studies are carried out along with some experimentation to determine the improvement in the performance of the plant due to advanced control. In this research, such an exercise is undertaken to assess the need for APC application. The main objectives of the paper are as follows: (1) to apply MPC to a number of simulation set-ups in order to demonstrate the need for MPC by comparing its performance with that of proportional-integral-derivative (PID) controllers; (2) to study the effect of controller parameters on control performance; (3) to develop an appropriate performance index (PI) to compare the performance of different controllers, and to develop a novel way to present the tuning map of a controller. These objectives were achieved by applying a PID controller and a special type of MPC, namely dynamic matrix control (DMC), to the multi-tank process simulated in Loop-Pro. The controller performance was then evaluated by changing the controller parameters.
This performance evaluation was based on indices related to the difference between the set point and the process variable, used to compare the two controllers. The same principle was applied to the continuous stirred tank heater (CSTH) and continuous stirred tank reactor (CSTR) processes simulated in MATLAB; for these processes, dedicated programs were written to evaluate the performance of the PID and MPC controllers. Finally, these performance indices, along with their controller parameters, were plotted using the SigmaPlot program. As a result, the improvement in the performance of the control loops was quantified using relevant indices to justify the need for and importance of advanced process control. It was also shown that, by using appropriate indices, a predictive controller can improve the performance of the control loop significantly.
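
A performance index of the kind described, built on the difference between set point and process variable, is the Integral of Absolute Error (IAE); a minimal sketch with illustrative response trajectories (not the paper's exact index or data):

```python
def iae(setpoint, response, dt):
    """Integral of Absolute Error: accumulated |set point - process variable|
    over the sampled response; a lower IAE means tighter tracking."""
    return sum(abs(setpoint - pv) * dt for pv in response)

# Illustrative closed-loop responses to a unit set-point change
pid_response = [0.0, 0.4, 0.7, 0.9, 1.05, 1.0]   # slower rise, slight overshoot
mpc_response = [0.0, 0.6, 0.9, 1.0, 1.0, 1.0]    # faster rise, no overshoot
print(iae(1.0, pid_response, 0.5), iae(1.0, mpc_response, 0.5))
```

Sweeping controller parameters and plotting the index at each point yields the kind of tuning map the abstract describes.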

Keywords: advanced process control (APC), control loop, model predictive control (MPC), proportional integral derivatives (PID), performance indices (PI)

Procedia PDF Downloads 393
12213 Prediction of Time to Crack Reinforced Concrete by Chloride Induced Corrosion

Authors: Anuruddha Jayasuriya, Thanakorn Pheeraphan

Abstract:

In this paper, a review of different mathematical models which can be used as prediction tools to assess the time to crack reinforced concrete (RC) due to corrosion is investigated. This investigation leads to an experimental study to validate a selected prediction model. Most of these mathematical models depend upon the mechanical behaviors, chemical behaviors, electrochemical behaviors or geometric aspects of the RC members during a corrosion process. The experimental program is designed to verify the accuracy of a well-selected mathematical model from a rigorous literature study. Fundamentally, the experimental program exemplifies both one-dimensional chloride diffusion using RC squared slab elements of 500 mm by 500 mm and two-dimensional chloride diffusion using RC squared column elements of 225 mm by 225 mm by 500 mm. Each set consists of three water-to-cement ratios (w/c); 0.4, 0.5, 0.6 and two cover depths; 25 mm and 50 mm. 12 mm bars are used for column elements and 16 mm bars are used for slab elements. All the samples are subjected to accelerated chloride corrosion in a chloride bath of 5% (w/w) sodium chloride (NaCl) solution. Based on a pre-screening of different models, it is clear that the well-selected mathematical model had included mechanical properties, chemical and electrochemical properties, nature of corrosion whether it is accelerated or natural, and the amount of porous area that rust products can accommodate before exerting expansive pressure on the surrounding concrete. The experimental results have shown that the selected model for both one-dimensional and two-dimensional chloride diffusion had ±20% and ±10% respective accuracies compared to the experimental output. The half-cell potential readings are also used to see the corrosion probability, and experimental results have shown that the mass loss is proportional to the negative half-cell potential readings that are obtained. 
Additionally, a statistical analysis is carried out to determine the most influential factor affecting the time for chloride diffusion to corrode the reinforcement in the concrete. The factors considered are w/c, bar diameter, and cover depth. The analysis, performed with Minitab statistical software, shows that cover depth has a more significant effect on the time to cracking from chloride-induced corrosion than the other factors considered. The selected mathematical model can thus be used for time predictions, as it covers a wide range of factors affecting the corrosion process, and it can predetermine the durability of RC structures that are vulnerable to chloride exposure. It is further concluded that cover thickness plays a vital role in durability with respect to chloride diffusion.
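The one- and two-dimensional diffusion tests above rest on Fick's second law of chloride ingress. As a purely illustrative sketch (not the paper's selected model, which also accounts for electrochemistry and the porous area around the bar), the classic error-function solution can be inverted numerically to estimate the corrosion initiation time at the bar depth. The diffusion coefficient and chloride thresholds below are assumed round numbers, not measured values:

```python
import math

def chloride_at_depth(x_mm, t_years, D, c_surface):
    """Crank's solution of Fick's second law for a semi-infinite medium:
    C(x, t) = Cs * (1 - erf(x / (2 * sqrt(D * t)))), with D in mm^2/year."""
    return c_surface * (1.0 - math.erf(x_mm / (2.0 * math.sqrt(D * t_years))))

def time_to_initiation(cover_mm, D, c_s, c_crit, t_max=200.0):
    """Bisect for the time at which chloride at the bar depth reaches c_crit."""
    lo, hi = 1e-6, t_max
    if chloride_at_depth(cover_mm, hi, D, c_s) < c_crit:
        return None  # threshold never reached within t_max
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if chloride_at_depth(cover_mm, mid, D, c_s) < c_crit:
            lo = mid
        else:
            hi = mid
    return hi

# Illustrative values (assumed, not from the paper): D = 30 mm^2/year,
# surface chloride 0.6% by mass of concrete, critical threshold 0.05%.
for cover in (25.0, 50.0):
    t = time_to_initiation(cover, 30.0, 0.6, 0.05)
    print(f"cover {cover:.0f} mm -> initiation after ~{t:.1f} years")
```

Because the solution depends only on x/(2√(Dt)), doubling the cover quadruples the initiation time, which is consistent with the finding that cover depth dominates the other factors.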

Keywords: accelerated corrosion, chloride diffusion, corrosion cracks, passivation layer, reinforcement corrosion

Procedia PDF Downloads 203
12212 The Link between Money Market and Economic Growth in Nigeria: Vector Error Correction Model Approach

Authors: Uyi Kizito Ehigiamusoe

Abstract:

The paper examines the impact of the money market on economic growth in Nigeria using data for the period 1980-2012. Econometric techniques such as the ordinary least squares method, Johansen's co-integration test, and the vector error correction model were used to examine both the long-run and short-run relationships. Evidence from the study suggests that although a long-run relationship exists between the money market and economic growth, the present state of the Nigerian money market is significantly and negatively related to economic growth. The link between the money market and the real sector of the economy remains very weak. This implies that the market is not yet developed enough to produce the growth needed to propel the Nigerian economy, owing to several challenges. It is therefore recommended that the government create appropriate macroeconomic policies and a legal framework, and sustain the present reforms, with a view to developing the market so as to promote productive activities, investment, and ultimately economic growth.
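The two-step logic behind this kind of analysis (a long-run equilibrium plus short-run error correction) can be sketched on synthetic data. The sketch below is a minimal Engle-Granger style error-correction model in plain NumPy, not the Johansen/VECM estimation the authors actually ran; all series and coefficients are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
# Synthetic cointegrated pair: "money market" series m and "growth" series y.
m = np.cumsum(rng.normal(size=n))                   # random walk
y = 2.0 + 0.5 * m + rng.normal(scale=0.3, size=n)   # long-run relation + noise

# Step 1: long-run regression y_t = a + b*m_t; residuals = disequilibrium.
X = np.column_stack([np.ones(n), m])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
ect = y - X @ beta                                  # error-correction term

# Step 2: short-run dynamics with the lagged error-correction term:
#   dy_t = c + gamma*dm_t + alpha*ect_{t-1} + e_t   (alpha < 0 => adjustment
#   back toward the long-run equilibrium after a shock)
dy, dm = np.diff(y), np.diff(m)
Z = np.column_stack([np.ones(n - 1), dm, ect[:-1]])
coef, *_ = np.linalg.lstsq(Z, dy, rcond=None)
print("long-run slope b =", round(beta[1], 3))
print("adjustment speed alpha =", round(coef[2], 3))
```

A significantly negative alpha is what "a long-run relationship exists" means operationally: deviations from the equilibrium are corrected over time.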

Keywords: economic growth, investments, money market, money market challenges, money market instruments

Procedia PDF Downloads 327
12211 Hepatic Regenerative Capacity after Acetaminophen-Induced Liver Injury in Mouse Model

Authors: N. F. Hamid, A. Kipar, J. Stewart, D. J. Antoine, B. K. Park, D. P. Williams

Abstract:

Acetaminophen (APAP) is a widely used analgesic that is safe at therapeutic doses. The mouse model of APAP toxicity has been used extensively for studies on the pathogenesis of, and interventions in, drug-induced liver injury, based on the CYP450-mediated formation of N-acetyl-p-benzoquinone imine, and, more recently, as a model for mechanism-based biomarkers. This study reports a delay in fasted CD1 mice, compared with fed mice, in rebounding to basal levels of hepatic GSH. Histologically, mice fasted for 15 hours prior to APAP treatment showed overall more intense cell loss with no evidence of apoptosis, whereas in non-fasted mice apoptotic cells were clearly seen on cleaved caspase-3 immunostaining. By 15 hours after APAP administration, hepatocytes in fed mice were recovering, with evidence of mitotic figures, and showed no histological difference from controls at 24 hours. In contrast, evidence of ongoing cell damage and inflammatory cell infiltration was still present in fasted mice until the end of the study. To further measure the regenerative capacity of the hepatocytes, inflammatory cytokines involved in the progression or regression of toxicity, such as TNF-α and IL-6, were measured in liver and spleen using RT-qPCR. Quantification of proliferating cell nuclear antigen (PCNA) demonstrated that hepatic regeneration takes longer in fasted than in fed mice. Together, these data confirm that fasting prior to APAP treatment not only modulates liver injury but may also delay the subsequent regeneration of hepatocytes.

Keywords: acetaminophen, liver, proliferating cell nuclear antigen, regeneration, apoptosis

Procedia PDF Downloads 410
12210 The Dark Side of Tourism's Implications: A Structural Equation Modeling Study of the 2016 Earthquake in Central Italy

Authors: B. Kulaga, A. Cinti, F. J. Mazzocchini

Abstract:

Although growing academic attention to dark tourism is a fairly recent phenomenon, among the various reasons for travelling, death-related ones are very ancient. Humans have always been fascinated by, and curious about, death, or at least have tried to learn lessons from it. This study describes the phenomenon of dark tourism related to the 2016 earthquake in Central Italy, which killed 302 people and was highly destructive for the rural areas of the Lazio, Marche, and Umbria regions. The primary objective is to examine the motivation-experience relationship at a dark tourism site using the structural equation model, first applied to dark tourism research in 2016 in a study conducted after the Beichuan earthquake. The findings of the current study are derived from primary data compiled, through a Likert-scale survey, from 350 tourists in the areas most affected by the 2016 earthquake, including the town of Amatrice, near the epicenter, and Castelluccio, Norcia, Ussita, and Visso. The structural equation model is used to examine the motivations behind dark travel and how the experience influences tourists' motivation and emotional reactions. Expected findings are in line with the previous study mentioned above: not all tourists visit thanatourism sites for dark tourism purposes; tourists' emotional reactions influence the emotional tourist experience more heavily than cognitive experiences do; and curious visitors are likely to engage cognitively by learning about the incident or related issues.

Keywords: dark tourism, emotional reaction, experience, motivation, structural equation model

Procedia PDF Downloads 127
12209 Mending Broken Fences Policing: Developing the Intelligence-Led/Community-Based Policing Model (IP-CP) and Quality/Quantity/Crime (QQC) Model

Authors: Anil Anand

Abstract:

Despite the enormous strides made during the past decade, particularly with the adoption and expansion of community policing, there remains much that police leaders can do to improve police-public relations. The urgency is particularly evident in cities across the United States and Europe, where an increasing number of police interactions over the past few years have ignited large, sometimes national, protests against police policy and strategy, highlighting a gap between what police leaders feel they have achieved in terms of public satisfaction, support, and legitimacy and the perception of bias among many marginalized communities. The decision on which policing strategy is chosen over another, how many resources are allocated, and how strenuously the policy is applied resides primarily with the police and the units and subunits tasked with its enforcement. The scope and opportunity police officers have to shape social attitudes and social policy cannot be overstated. How do police leaders, for instance, decide when to apply one strategy, say community-based policing, over another, like intelligence-led policing? How do police leaders measure performance and success? Should these measures be based on quantitative or qualitative criteria, or on something else entirely? And how do police leaders define, allow, and control discretionary decision-making? Mending Broken Fences Policing provides police and security service leaders with a model based on social cohesion that incorporates intelligence-led and community policing (IP-CP), supplemented by a quality/quantity/crime (QQC) framework, to provide a four-step process for the articulable application of police intervention, performance measurement, and application of discretion.

Keywords: social cohesion, quantitative performance measurement, qualitative performance measurement, sustainable leadership

Procedia PDF Downloads 278
12208 Semi-Supervised Learning for Spanish Speech Recognition Using Deep Neural Networks

Authors: B. R. Campomanes-Alvarez, P. Quiros, B. Fernandez

Abstract:

Automatic speech recognition (ASR) is a machine-based process of decoding and transcribing oral speech. A typical ASR system receives acoustic input from a speaker or an audio file, analyzes it using algorithms, and produces an output in the form of text. Some speech recognition systems use hidden Markov models (HMMs) to deal with the temporal variability of speech and Gaussian mixture models (GMMs) to determine how well each state of each HMM fits a short window of frames of coefficients representing the acoustic input. Another way to evaluate the fit is to use a feed-forward neural network that takes several frames of coefficients as input and produces posterior probabilities over HMM states as output. Deep neural networks (DNNs) with many hidden layers, trained using new methods, have been shown to outperform GMMs in a variety of speech recognition systems. Acoustic models for state-of-the-art ASR systems are usually trained on massive amounts of data. However, audio files with their corresponding transcriptions can be difficult to obtain, especially in the Spanish language. In such low-resource scenarios, building an ASR model is a complex task due to the lack of labeled data, resulting in an under-trained system. Semi-supervised learning approaches thus arise as a necessity, given the high cost of transcribing audio data. The main goal of this proposal is to develop a procedure based on acoustic semi-supervised learning for Spanish ASR systems using DNNs. The semi-supervised learning approach consists of: (a) Training a seed ASR model with a DNN using a set of audio files and their respective transcriptions. The DNN was initialized as a one-hidden-layer network, and the number of hidden layers was increased during training to five; refinement of the weight matrices and bias terms was performed with stochastic gradient descent (SGD) training, using the cross-entropy criterion as the objective function.
(b) Decoding/testing a set of unlabeled data with the obtained seed model. (c) Selecting a suitable subset of the validated data to retrain the seed model, thereby improving its performance on the target test set. To choose the most precise transcriptions, three confidence scores based on the lattice concept (the graph cost, the acoustic cost, and a combination of both) were used as the selection technique. The performance of the ASR system is measured by the word error rate (WER). The test dataset was renewed in order to remove the new transcriptions added to the training dataset. Several experiments were carried out to select the best ASR results, and a comparison between a GMM-based model without retraining and the proposed DNN system was made under the same conditions. Results showed that the semi-supervised DNN-based ASR model outperformed the GMM model, in terms of WER, in all tested cases; the best result was a 6% relative WER improvement. These promising results suggest that the proposed technique could be suitable for building ASR models in low-resource environments.
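Steps (a)-(c) form a classic self-training loop. The sketch below uses a toy one-dimensional "classifier" in place of the DNN acoustic model and a margin-based score in place of the lattice graph/acoustic costs; the 0.5 confidence threshold is an assumed value, not one from the paper:

```python
import random

random.seed(1)

def train(labelled):
    """Stand-in for DNN acoustic-model training: memorise a per-class
    mean of 1-D 'features' (a toy nearest-mean classifier)."""
    sums, counts = {}, {}
    for x, y in labelled:
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    return {y: sums[y] / counts[y] for y in sums}

def decode(model, x):
    """Return (best label, confidence). The confidence is a margin between
    the two nearest class means, standing in for lattice-based scores."""
    dists = sorted((abs(x - mu), y) for y, mu in model.items())
    (d1, y1), (d2, _) = dists[0], dists[1]
    return y1, (d2 - d1) / (d2 + d1 + 1e-9)

# (a) seed model from a small labelled set
labelled = [(0.1, "a"), (0.2, "a"), (0.9, "b"), (1.1, "b")]
seed = train(labelled)

# (b) decode unlabelled data, (c) keep only confident hypotheses and retrain
unlabelled = [random.uniform(0.0, 1.2) for _ in range(50)]
confident = []
for x in unlabelled:
    y_hat, conf = decode(seed, x)
    if conf > 0.5:            # confidence threshold (assumed value)
        confident.append((x, y_hat))
model = train(labelled + confident)
print("seed means:", seed, "-> retrained means:", model)
```

The real system replaces `train` with DNN training, `decode` with lattice decoding, and the margin with the graph/acoustic-cost metrics, but the control flow is the same.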

Keywords: automatic speech recognition, deep neural networks, machine learning, semi-supervised learning

Procedia PDF Downloads 324
12207 Parkinson’s Disease Hand-Eye Coordination and Dexterity Evaluation System

Authors: Wann-Yun Shieh, Chin-Man Wang, Ya-Cheng Shieh

Abstract:

This study aims to develop an objective scoring system to evaluate hand-eye coordination and hand dexterity in Parkinson's disease. The system contains three boards, each implemented with sensors that sense a user's finger operations. The operations include a peg test, a block test, and a blind block test. A user has to use vision, hearing, and tactile ability to complete these operations, and the board records the results automatically. These results can help physicians evaluate a user's reaction, coordination, and dexterity functions. The results are collected in a cloud database for further analysis and statistics, and a researcher can use the system to obtain systematic, graphical reports for an individual or a group of users. In particular, a deep learning model is developed to learn the features of the data from different users; this model will help physicians assess Parkinson's disease symptoms with a more intelligent algorithm.

Keywords: deep learning, hand-eye coordination, reaction, hand dexterity

Procedia PDF Downloads 48
12206 Life in Bequia in the Era of Climate Change: Societal Perception of Adaptation and Vulnerability

Authors: Sherry Ann Ganase, Sandra Sookram

Abstract:

This study examines adaptation measures and the factors that influence adaptation decisions in Bequia by using multiple linear regression and a structural equation model (SEM). Using survey data, the results suggest that households are knowledgeable and concerned about climate change but lack knowledge about the measures needed to adapt. The findings from the SEM suggest that positive relationships exist between vulnerability and adaptation and between vulnerability and perception, along with a negative relationship between perception and adaptation. This suggests that awareness of the terms associated with climate change, and knowledge about climate change, are insufficient for implementing adaptation measures; instead, the risk and importance placed on climate change, the vulnerability experienced through household flooding and drainage, and the expected threat of future sea levels are the main factors that influence the adaptation decision. The results obtained in this study are beneficial to all, as adaptation requires a collective effort by stakeholders.

Keywords: adaptation, Bequia, multiple linear regression, structural equation model

Procedia PDF Downloads 444
12205 Corporate Governance and Firm Performance: Empirical Evidence from India

Authors: G. C. Surya Bahadur, Ranjana Kothari

Abstract:

The paper analyzes the linkages between corporate governance and firm performance in India. The study employs panel data on 50 Nifty companies from 2008 to 2012. Using an LSDV panel data model and a 2SLS model, the study reveals that good corporate governance practices adopted by companies are positively related to financial performance. Board independence, the number of board committees, and executive compensation are found to have a positive relationship with performance, while ownership by promoters and financial leverage have a negative relationship. There is a bi-directional relationship between corporate governance and financial performance: companies with sound financial performance are more likely to conform to corporate governance norms and standards and to implement a sound corporate governance system. The findings indicate that companies can enhance business performance and sustainability by embracing sound corporate governance practices.
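The LSDV (least squares dummy variable) estimator mentioned above absorbs firm-specific fixed effects with one dummy column per company, so the governance coefficient is identified from within-firm variation. A minimal sketch on synthetic data (the coefficients and panel dimensions are invented, not the Nifty sample):

```python
import numpy as np

rng = np.random.default_rng(2)
n_firms, n_years = 5, 8
firm = np.repeat(np.arange(n_firms), n_years)

# Synthetic panel: performance depends on a governance score plus a
# firm-specific fixed effect (values illustrative, not from the paper).
governance = rng.normal(size=n_firms * n_years)
fixed_effect = rng.normal(scale=2.0, size=n_firms)[firm]
performance = (0.8 * governance + fixed_effect
               + rng.normal(scale=0.2, size=firm.size))

# LSDV: one dummy column per firm absorbs the fixed effects
# (no separate intercept, since the dummies span the constant).
dummies = (firm[:, None] == np.arange(n_firms)[None, :]).astype(float)
X = np.column_stack([governance, dummies])
beta, *_ = np.linalg.lstsq(X, performance, rcond=None)
print("governance coefficient:", round(beta[0], 3))  # near the true 0.8
```

The paper's 2SLS step additionally instruments governance to address the bi-directional causality the authors report; that step is omitted here.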

Keywords: board structure, corporate governance, executive compensation, ownership structure

Procedia PDF Downloads 456
12204 Leadership Strategies in Social Enterprises through Reverse Accountability: Analysis of Social Control for Pragmatic Organizational Design

Authors: Ananya Rajagopal

Abstract:

The study is based on an analysis of qualitative data used to analyze the business performance of entrepreneurs in emerging markets, focusing on core variables such as collective leadership in social entrepreneurship and the reverse-accountability attributes of stakeholders. In-depth interviews were conducted with 25 emerging enterprises in Mexico across five industrial segments. The study was conducted around five major research questions, which helped in developing a grounded theory of reverse accountability. The results reveal that the traditional entrepreneurship model based on an individualistic leadership style is being replaced by a collective leadership model. The study focuses on leadership styles within social enterprises aimed at enhancing managerial capabilities and competencies, stakeholder value, and entrepreneurial growth. The theoretical motivation of this study is derived from stakeholder theory and agency theory.

Keywords: reverse accountability, social enterprises, collective leadership, grounded theory, social governance

Procedia PDF Downloads 102
12203 Atomic Clusters: A Unique Building Motif for Future Smart Nanomaterials

Authors: Debesh R. Roy

Abstract:

Understanding the origin and growth mechanism of nanomaterials from a fundamental building unit is a challenging problem for scientists. Recently, immense attention has been devoted to predicting exceptionally stable atomic cluster units as the building blocks for future smart materials. The present study is a systematic investigation of the stability and electronic properties of a series of bimetallic (semiconductor-alkaline earth) clusters, viz. BxMg3 (x=1-5), in search of exceptionally or unusually stable motifs. A popular hybrid exchange-correlation functional, B3LYP, as proposed by A. D. Becke, along with a large basis set, viz. 6-31+G[d,p], is employed for this purpose within the density functional formalism. The magic stability among the clusters concerned is explained using the jellium model. It is evident from the present study that the magic stability of the B4Mg3 cluster arises from jellium shell closure.
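The jellium shell-closure argument can be checked by simple valence-electron counting: B contributes 3 valence electrons and Mg contributes 2, so B4Mg3 carries 4*3 + 3*2 = 18 electrons, a spherical-jellium magic number (closing the 1S, 1P, and 1D shells). A small sketch of this bookkeeping:

```python
# Spherical jellium magic numbers (electronic shell closures).
JELLIUM_MAGIC = {2, 8, 18, 20, 34, 40, 58}

VALENCE = {"B": 3, "Mg": 2}  # valence electrons per atom

def valence_electrons(n_b, n_mg):
    """Total valence electron count of a BnMgm cluster."""
    return VALENCE["B"] * n_b + VALENCE["Mg"] * n_mg

for x in range(1, 6):
    n_e = valence_electrons(x, 3)
    tag = "magic (shell closure)" if n_e in JELLIUM_MAGIC else ""
    print(f"B{x}Mg3: {n_e} valence electrons {tag}")
```

Only x = 4 lands on a magic count, consistent with the reported magic stability of B4Mg3; the counting is of course only the jellium-model rationale, not a substitute for the DFT calculation itself.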

Keywords: atomic clusters, density functional theory, jellium model, magic clusters, smart nanomaterials

Procedia PDF Downloads 511
12202 Second Order MIMO Sliding Mode Controller for Nonlinear Modeled Wind Turbine

Authors: Alireza Toloei, Ahmad R. Saffary, Reza Ghasemi

Abstract:

Due to the growing need for energy and limited fossil resources, the use of renewable energy, particularly wind, is strongly favored. Not all wind energy can be captured: by Betz's law, at most 59% of the total kinetic energy of the wind can be extracted by a turbine. Turbine control to achieve maximum performance while maintaining stable operating conditions is therefore necessary. In this article, we design a nonlinear controller for a variable-speed, variable-pitch horizontal-axis wind turbine to obtain maximum output power. The model presented covers a wide range of horizontal-axis wind turbines; however, the parameters used in this model are from the Vestas V29 225 kW wind turbine. We designed a second-order sliding mode controller that is robust to changes in wind speed and that eliminates chattering by using the super-twisting algorithm. Finally, the accuracy of the results is verified through simulation in MATLAB.
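The super-twisting algorithm referenced above hides the discontinuous sign term behind an integrator, so the control signal itself stays continuous and chattering is suppressed. A minimal sketch on a toy first-order plant (the gains, disturbance, and plant are illustrative and unrelated to the Vestas V29 model):

```python
import math

def super_twisting_step(s, v, k1, k2, dt):
    """One Euler step of the super-twisting algorithm:
        u = -k1 * sqrt(|s|) * sign(s) + v,    dv/dt = -k2 * sign(s)
    The discontinuous sign() acts only through the integrator state v,
    so the control u itself is continuous, which suppresses chattering."""
    sign_s = (s > 0) - (s < 0)
    u = -k1 * math.sqrt(abs(s)) * sign_s + v
    v_next = v - k2 * sign_s * dt
    return u, v_next

# Toy plant: ds/dt = u + d(t), with a bounded matched disturbance d whose
# derivative bound (0.2 here) must stay below the gain k2.
s, v, dt = 1.0, 0.0, 1e-3
k1, k2 = 1.5, 1.1        # gains chosen for the toy bounds, not the turbine
for i in range(5000):
    t = i * dt
    u, v = super_twisting_step(s, v, k1, k2, dt)
    d = 0.2 * math.sin(t)
    s += (u + d) * dt
print("final |s| =", abs(s))
```

The sliding variable s is driven to (a small neighborhood of) zero in finite time despite the disturbance, while v settles to cancel d; this continuous-control property is the reason the algorithm is the standard chattering remedy in second-order sliding mode designs.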

Keywords: nonlinear controller, robust, sliding mode, kinetic energy

Procedia PDF Downloads 479
12201 ATM Location Problem and Cash Management in ATMs

Authors: M. Erol Genevois, D. Celik, H. Z. Ulukan

Abstract:

Automated teller machines (ATMs) are among the most important service facilities in the banking industry. Investment in ATMs, and their impact on the banking industry, is growing steadily in every part of the world. Banks take many factors into consideration, such as safety, convenience, visibility, and cost, in order to determine the optimum locations of ATMs. Today, ATMs are available not only in bank branches but also at retail locations. Another important factor is cash management in ATMs: a cash demand model for every ATM is needed in order to have an efficient cash management system. This forecasting model is based on historical cash demand data, which is highly related to the ATM's location, so the location and cash management problems should be considered together. Although the literature on facility location models is quite large, it is surprising that only a few studies handle the ATM location and cash management problems together. In order to fill this gap, this paper provides a general review of studies, efforts, and developments on the ATM location and cash management problem.
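A cash demand model of the kind described is, at its simplest, a seasonal baseline over the ATM's own withdrawal history. The sketch below forecasts next week's demand as per-weekday means and adds a replenishment buffer; the demand figures and the 20% buffer are invented for illustration, and a real system would add richer features (location type, holidays, trends):

```python
from collections import defaultdict

# Illustrative daily withdrawal totals for one ATM (values assumed),
# three weeks of Mon..Sun data with a weekend peak.
history = [52, 48, 50, 61, 75, 90, 70,   # week 1
           55, 47, 52, 63, 78, 95, 72,   # week 2
           50, 49, 51, 60, 77, 92, 69]   # week 3

def day_of_week_forecast(series, week_len=7):
    """Forecast next week as the per-weekday mean of the history --
    the simplest seasonal baseline for ATM cash demand."""
    buckets = defaultdict(list)
    for i, x in enumerate(series):
        buckets[i % week_len].append(x)
    return [sum(v) / len(v) for _, v in sorted(buckets.items())]

forecast = day_of_week_forecast(history)
# Replenishment plan: forecast demand plus a safety buffer (assumed 20%).
replenishment = [round(1.2 * f) for f in forecast]
print("forecast:", forecast)
print("replenishment:", replenishment)
```

Even this baseline makes the paper's point concrete: the demand pattern (and hence the replenishment schedule) is location-dependent, which is why the location and cash management problems belong together.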

Keywords: ATM location problem, cash management problem, ATM cash replenishment problem, literature review in ATMs

Procedia PDF Downloads 465