Search results for: Vector Error Correction Model (VECM)

15981 Bio-Oil Compounds Sorption Enhanced Steam Reforming

Authors: Esther Acha, Jose Cambra, De Chen

Abstract:

Hydrogen is considered an important energy vector for the 21st century. At present there are several obstacles to the implementation of a hydrogen economy, one of them being the high purity required for hydrogen. This energy vector is still mainly produced from fuels, from which hydrogen is obtained as one component of a mixture containing other gases such as CO, CO2 and H2O. A promising sustainable pathway for hydrogen is the steam reforming of bio-oils derived from biomass, e.g. via fast pyrolysis. Bio-oils are a mixture of acids, alcohols, aldehydes, esters, ketones, sugars, phenols, guaiacols, syringols, furans and multi-functional compounds, and also contain up to 30 wt% of water. The sorption enhanced steam reforming (SESR) process is attracting a great deal of attention because it combines hydrogen production and CO2 separation. In the SESR process, carbon dioxide is captured by an in situ sorbent, which shifts the reversible reforming and water gas shift reactions to the product side, beyond their conventional thermodynamic limits, giving rise to higher hydrogen production and lower cost. The hydrogen-containing mixture has been obtained from the SESR of bio-oil type compounds. Different types of catalysts have been tested, all of them containing Ni at around 30 wt%. Two samples were prepared by the wet impregnation technique over conventional (gamma alumina) and non-conventional (olivine) supports, and a third catalyst was prepared over a hydrotalcite-like material (HT). The employed sorbent is a commercial dolomite. The activity tests were performed in a bench-scale plant (PID Eng&Tech) using a stainless steel fixed bed reactor. The catalysts were reduced in situ in the reactor before the activity tests. The effluent stream was cooled down, the condensed liquid was collected and weighed, and the gas phase was analysed online by a microGC. The hydrogen yield and process behaviour were analysed without the sorbent (the traditional SR, where a second purification step is needed but which operates in steady state) and with SESR (where the purification step could be avoided but which operates in batch mode). The influence of the support type and preparation method will be observed in the produced hydrogen yield. Additionally, the stability of the catalysts is critical, because the SESR process requires sorption-desorption steps: the hydrogen yield and purity have to be high and also stable, even after several sorption-desorption cycles. The prepared catalysts were characterized by different techniques to determine the physicochemical properties of the fresh-reduced and used (after the activity tests) materials. The characterization results, together with the activity results, show the influence of the catalyst preparation method and calcination temperature, and can even explain the observed yield and conversion.

Keywords: CO2 sorbent, enhanced steam reforming, hydrogen

Procedia PDF Downloads 581
15980 Using TRACE, PARCS, and SNAP Codes to Analyze the Load Rejection Transient of ABWR

Authors: J. R. Wang, H. C. Chang, A. L. Ho, J. H. Yang, S. W. Chen, C. Shih

Abstract:

The purpose of this study is to analyze the load rejection transient of the ABWR using the TRACE, PARCS, and SNAP codes. The study proceeds in several steps. First, the ABWR model is established using TRACE, PARCS, and SNAP. Second, the key parameters are identified to further refine the TRACE/PARCS/SNAP model in the frame of a steady state analysis. Third, the TRACE/PARCS/SNAP model is used to perform the load rejection transient analysis. Finally, the FSAR data are used for comparison with the analysis results. The TRACE/PARCS results are consistent with the FSAR data for the important parameters, indicating that the TRACE/PARCS/SNAP model of the ABWR reproduces the load rejection transient with good accuracy.

Keywords: ABWR, TRACE, PARCS, SNAP

Procedia PDF Downloads 199
15979 A Dynamic Model for Assessing the Advanced Glycation End Product Formation in Diabetes

Authors: Victor Arokia Doss, Kuberapandian Dharaniyambigai, K. Julia Rose Mary

Abstract:

Advanced Glycation End (AGE) products are the end products of the reaction between the excess reducing sugar present in diabetes and free amino groups in proteins, lipids, and nucleic acids. The non-enzymic glycation of molecules such as hemoglobin, collagen, and other structurally and functionally important proteins thus adds to pathogenic complications such as diabetic retinopathy, neuropathy, nephropathy, vascular changes, atherosclerosis, Alzheimer's disease, rheumatoid arthritis, and chronic heart failure. The most common non-cross-linking AGE, carboxymethyl lysine (CML), is formed by the oxidative breakdown of fructosyllysine, which is a product of glucose and lysine. CML is formed in a wide variety of tissues and is an index of the extent of glycoxidative damage. We have therefore constructed a mathematical and computational model that predicts the effect of temperature differences in vivo on the formation of CML, which is now considered an important marker of the intracellular milieu. This hybrid model, whose parameter fit and sensitivity have been tested against available experimental data, paves the way for designing novel laboratory experiments that would throw more light on the pathological formation of AGE adducts and on the pathophysiology of diabetic complications.
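
As a purely illustrative sketch of how temperature enters such a model (the abstract does not give the authors' equations), CML formation can be represented by first-order kinetics with an Arrhenius rate constant; every functional form and parameter value below is an assumption for illustration.

```python
import numpy as np

R = 8.314  # J/(mol*K), gas constant

def arrhenius_rate(T_kelvin, A=1.0e6, Ea=60e3):
    """Hypothetical Arrhenius rate constant k(T) = A*exp(-Ea/(R*T)) per hour."""
    return A * np.exp(-Ea / (R * T_kelvin))

def cml_trajectory(T_kelvin, fl0=1.0, hours=72, dt=0.1):
    """Integrate d[CML]/dt = k(T)*[FL], d[FL]/dt = -k(T)*[FL] with explicit Euler.
    FL = fructosyllysine precursor (arbitrary units); purely illustrative."""
    k = arrhenius_rate(T_kelvin)
    fl, cml, t = fl0, 0.0, 0.0
    while t < hours:
        d = k * fl * dt
        fl -= d
        cml += d
        t += dt
    return cml

# Compare CML formed at normal vs. elevated body temperature
for T in (310.15, 312.15):  # 37 C and 39 C
    print(f"T = {T - 273.15:.1f} C -> CML formed: {cml_trajectory(T):.4f} a.u.")
```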

Keywords: advanced glycation end-products, CML, mathematical model, computational model

Procedia PDF Downloads 131
15978 Measuring Banking Risk

Authors: Mike Tsionas

Abstract:

The paper develops new indices of financial stability based on an explicit model of expected utility maximization by financial institutions subject to the classical technology restrictions of neoclassical production theory. The model can be estimated using standard econometric techniques, like GMM for dynamic panel data and latent factor analysis for the estimation of co-variance matrices. An explicit functional form for the utility function is not needed and we show how measures of risk aversion and prudence (downside risk aversion) can be derived and estimated from the model. The model is estimated using data for Eurozone countries and we focus particularly on (i) the use of the modeling approach as an “early warning mechanism”, (ii) the bank- and country-specific estimates of risk aversion and prudence (downside risk aversion), and (iii) the derivation of a generalized measure of risk that relies on loan-price uncertainty.
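
The abstract names latent factor analysis as the tool for estimating the covariance matrices; the following minimal sketch shows that step alone on simulated bank-level series using scikit-learn's FactorAnalysis. The data, the number of factors, and the link to the paper's estimator are assumptions.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)

# Simulated panel: 200 observations of 10 bank-level series driven by 2 latent factors
n_obs, n_banks, n_factors = 200, 10, 2
factors = rng.normal(size=(n_obs, n_factors))
loadings = rng.normal(size=(n_factors, n_banks))
X = factors @ loadings + 0.3 * rng.normal(size=(n_obs, n_banks))

fa = FactorAnalysis(n_components=n_factors, random_state=0).fit(X)

# Implied covariance: Lambda Lambda' + Psi (diagonal idiosyncratic noise)
Lambda = fa.components_.T                      # (n_banks, n_factors) loading matrix
Sigma_hat = Lambda @ Lambda.T + np.diag(fa.noise_variance_)

print("largest absolute error vs. sample covariance:",
      np.abs(Sigma_hat - np.cov(X, rowvar=False)).max())
```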

Keywords: financial stability, banking, expected utility maximization, sub-prime crisis, financial crisis, eurozone, PIIGS

Procedia PDF Downloads 352
15977 The Application of the Biopsychosocial-Spiritual Model to the Quality of Life of People Living with Sickle Cell Disease

Authors: Anita Paddy, Millicent Obodai, Lebbaeus Asamani

Abstract:

The management of sickle cell disease (SCD) requires a multidisciplinary team for better outcomes. Thus, literature on the application of the biopsychosocial model for the management and explanation of chronic pain in SCD and other chronic diseases abounds. However, there is limited research on the use of the biopsychosocial model together with a spiritual component (the biopsychosocial-spiritual model). This study investigated the extent to which healthcare providers utilized the biopsychosocial-spiritual model in the management of chronic pain to improve the quality of life (QoL) of patients with SCD. The study employed a descriptive survey design involving a consecutive sample of 261 patients with SCD, aged 18 to 79 years, who were accessing hematological services at the Clinical Genetics Department of the Korle Bu Teaching Hospital and who gave written consent to participate. The theory of integrated quality of life, the gate control theory of pain, and the biopsychosocial(-spiritual) model were tested. An instrument for the biopsychosocial-spiritual model was developed on the basis of the literature reviewed, while the World Health Organisation Quality of Life BREF (WHOQoL-BREF) and the spirituality rating scale were adapted for data collection. Data were analyzed using descriptive statistics (means, standard deviations, frequencies, and percentages) and partial least squares structural equation modeling. The study revealed that healthcare providers leaned strongly toward the biological domain of the model compared with the other domains; hence, participants' QoL was not improved to the full extent suggested by the biopsychosocial(-spiritual) model. The QoL and spirituality of patients with SCD were nonetheless quite high. A significant negative impact of spirituality on QoL was also found. Finally, the biosocial domain of the biopsychosocial-spiritual model was the most significant predictor of QoL. It is recommended that policymakers train healthcare providers to integrate the psychosocial-spiritual component into health services, that education on SCD and its impact across the domains of the model be intensified, and that health practitioners consider utilizing these components fully in the management of the condition.

Keywords: biopsychosocial (spiritual), sickle cell disease, quality of life, healthcare, Accra

Procedia PDF Downloads 76
15976 A Corpus Output Error Analysis of Chinese L2 Learners From America, Myanmar, and Singapore

Authors: Qiao-Yu Warren Cai

Abstract:

Due to the rise of big data, building corpora and using them to analyze Chinese L2 learners' language output has become a trend. Various empirical research has been conducted using Chinese corpora built by different academic institutes. However, most of this research analyzed the data in the Chinese corpora using corpus-based qualitative content analysis with descriptive statistics. Descriptive statistics can summarize the subjects or samples that a study has actually measured, but the collected data cannot be generalized to the population. Comte, a French positivist, argued as early as the 19th century that human knowledge, whether in the humanities and social sciences or in the natural sciences, should be verified in a scientific way to construct a universal theory that explains the truth and human behavior. Inferential statistics, which can judge the probability that a difference observed between groups is dependable or caused by chance (Free Geography Notes, 2015) and can infer from the subjects or samples what the population might think or how it might behave, is just the right method to support Comte's argument in the field of TCSOL. Inferential statistics is also a core of quantitative research, yet little research has been conducted combining corpora with inferential statistics. Little research analyzes the differences in Chinese L2 learners' corpus output errors using one-way ANOVA, so the findings of previous research are limited in their ability to infer the population's Chinese errors from the given samples' corpora. To fill this knowledge gap in the professional development of Taiwanese TCSOL, the present study utilizes one-way ANOVA to analyze the corpus output errors of Chinese L2 learners from America, Myanmar, and Singapore. The results show that no significant difference exists in 'shì (是) sentence' and word order errors, but compared with American and Singaporean learners, it is significantly easier for Myanmar learners to produce 'sentence blends.' Based on these results, the present study provides an instructional approach and contributes to further exploration of how Chinese L2 learners can adopt learning strategies to reduce errors.
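
A minimal sketch of the test described above, run on invented placeholder error counts rather than the study's data, looks as follows.

```python
from scipy import stats

# Hypothetical 'sentence blend' error counts per learner (placeholder data)
american    = [2, 3, 1, 4, 2, 3, 2]
myanmar     = [6, 7, 5, 8, 6, 7, 9]
singaporean = [3, 2, 4, 3, 2, 3, 4]

f_stat, p_value = stats.f_oneway(american, myanmar, singaporean)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
# p < 0.05 would indicate that at least one group mean differs; post-hoc tests
# (e.g. Tukey HSD) would then locate which pairwise differences are significant.
```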

Keywords: Chinese corpus, error analysis, one-way analysis of variance, Chinese L2 learners, Americans, Myanmar, Singaporeans

Procedia PDF Downloads 107
15975 Numerical Study of Elastic Performances of Sandwich Beam with Carbon-Fibre Reinforced Skins

Authors: Soukaina Ounss, Hamid Mounir, Abdellatif El Marjani

Abstract:

Sandwich materials with composite reinforced skins are widely required in advanced construction applications to ensure resistant structures. Their light weight, high flexural stiffness, and good thermal insulation make them a suitable solution for efficient structures with high rigidity and optimal energy safety. In this paper, the mechanical behavior of a sandwich beam with composite skins reinforced by unidirectional carbon fibers is investigated numerically by analyzing the impact of the reinforcement specifications on the longitudinal elastic modulus, in order to select a sandwich configuration that combines good rigidity with close convergence to the analytical approach proposed to verify the numerical simulations. The study therefore starts by testing the flexural performance of skins with various fiber orientations and volume fractions to determine those to use in the sandwich beam. The combination of a reinforcement inclination of 30° with a volume ratio of 60% is selected, together with the combination of a 60° fiber orientation with a 40% volume fraction; the latter gives the chosen skins high rigidity with an optimal fiber concentration and greatly improves convergence to the analytical results in the sandwich model, owing to the crucial role of the core as a transverse shear absorber. A resistant sandwich beam is then built from face sheets consisting of two layers of the previous skins with fibers oriented at 60° and an epoxy core; this beam has a longitudinal elastic modulus of 54 GPa, which matches the analytical value to within a negligible error of 2%.
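
For context, the longitudinal modulus of an off-axis unidirectional ply can be estimated with standard rule-of-mixtures micromechanics and the classical transformed-compliance relation; the sketch below uses placeholder carbon/epoxy constituent properties that are not taken from the paper.

```python
import math

def off_axis_modulus(theta_deg, Vf, Ef=230e9, Em=3.5e9, Gf=15e9, Gm=1.3e9,
                     nu_f=0.2, nu_m=0.35):
    """Longitudinal modulus E_x of a unidirectional ply loaded at angle theta.
    Rule-of-mixtures micromechanics + classical transformed-compliance relation.
    Material constants are placeholder values for carbon fiber / epoxy."""
    E1 = Ef * Vf + Em * (1 - Vf)                  # fiber direction
    E2 = 1.0 / (Vf / Ef + (1 - Vf) / Em)          # transverse (inverse rule of mixtures)
    G12 = 1.0 / (Vf / Gf + (1 - Vf) / Gm)         # in-plane shear
    nu12 = nu_f * Vf + nu_m * (1 - Vf)
    c, s = math.cos(math.radians(theta_deg)), math.sin(math.radians(theta_deg))
    inv_Ex = c**4 / E1 + s**4 / E2 + (1.0 / G12 - 2.0 * nu12 / E1) * s**2 * c**2
    return 1.0 / inv_Ex

for theta, Vf in ((30, 0.60), (60, 0.40)):
    print(f"theta={theta} deg, Vf={Vf:.0%}: E_x = {off_axis_modulus(theta, Vf)/1e9:.1f} GPa")
```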

Keywords: fibers orientation, fibers volume ratio, longitudinal elastic modulus, sandwich beam

Procedia PDF Downloads 174
15974 Neural Network-based Risk Detection for Dyslexia and Dysgraphia in Sinhala Language Speaking Children

Authors: Budhvin T. Withana, Sulochana Rupasinghe

Abstract:

The problem of Dyslexia and Dysgraphia, two learning disabilities that affect reading and writing abilities, respectively, is a major concern for the educational system. Due to the complexity and uniqueness of the Sinhala language, these conditions are especially difficult to detect in children who speak it. Traditional risk detection methods for Dyslexia and Dysgraphia frequently rely on subjective assessments, which makes broad screening difficult and time-consuming; as a result, diagnoses may be delayed and opportunities for early intervention may be lost. The project developed a hybrid model that utilizes various deep learning techniques for detecting the risk of Dyslexia and Dysgraphia. Specifically, ResNet50, VGG16, and YOLOv8 were integrated to detect handwriting issues, and their outputs were fed into an MLP model along with several other inputs. The hyperparameters of the MLP model were fine-tuned using Grid Search CV, which allowed the optimal values to be identified for the model. This approach proved effective in accurately predicting the risk of Dyslexia and Dysgraphia, providing a valuable tool for early detection and intervention of these conditions. The ResNet50 model achieved an accuracy of 0.9804 on the training data and 0.9653 on the validation data. The VGG16 model achieved an accuracy of 0.9991 on the training data and 0.9891 on the validation data. The MLP model achieved a training accuracy of 0.99918 and a testing accuracy of 0.99223, with a loss of 0.01371. These results demonstrate that the proposed hybrid model achieves a high level of accuracy in predicting the risk of Dyslexia and Dysgraphia.
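
A rough sketch of the pipeline described above (CNN-derived features plus other inputs fed to an MLP tuned by grid search) is given below; the feature vectors, labels, and hyperparameter grid are placeholders, not the study's.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
n = 300

# Placeholders for features produced upstream by ResNet50 / VGG16 / YOLOv8
# on handwriting images, plus other tabular inputs (age, test scores, ...)
cnn_features = rng.normal(size=(n, 32))
other_inputs = rng.normal(size=(n, 5))
X = np.hstack([cnn_features, other_inputs])
y = rng.integers(0, 2, size=n)          # 1 = at risk, 0 = not at risk

param_grid = {
    "mlpclassifier__hidden_layer_sizes": [(32,), (64, 32)],
    "mlpclassifier__alpha": [1e-4, 1e-3],
}
pipe = make_pipeline(StandardScaler(), MLPClassifier(max_iter=1000, random_state=0))
search = GridSearchCV(pipe, param_grid, cv=5)
search.fit(X, y)

print("best params:", search.best_params_)
print("cross-validated accuracy:", round(search.best_score_, 3))
```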

Keywords: neural networks, risk detection system, Dyslexia, Dysgraphia, deep learning, learning disabilities, data science

Procedia PDF Downloads 119
15973 Modelling the Education Supply Chain with Network Data Envelopment Analysis

Authors: Sourour Ramzi, Claudia Sarrico

Abstract:

Little has been done on network DEA in education, and nobody has attempted to model the whole education supply chain using network DEA. As such the contribution of the present paper is to propose a model for measuring the efficiency of education supply chains using network DEA. First, we use a general survey of data envelopment analysis (DEA) to establish the emergent themes for research in DEA, and focus on the theme of Network DEA. Second, we use a survey on two-stage DEA models, and Network DEA to write a state of the art on Network DEA, particularly applied to supply chain management. Third, we use a survey on DEA applications to establish the most influential papers on DEA education applications, in order to establish the state of the art on applications of DEA in education, in general, and applications of DEA to education using network DEA, in particular. Finally, we propose a model for measuring the performance of education supply chains of different education systems (countries or states within a country, for instance). We then use this model on some empirical data.
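
As background for the network formulation proposed here, the standard single-stage input-oriented CCR DEA score can be computed with a small linear program; the sketch below uses scipy and invented data and is the plain (non-network) model only.

```python
import numpy as np
from scipy.optimize import linprog

# Invented data: 5 decision-making units (e.g. education systems),
# 2 inputs (spending, staff) and 2 outputs (graduates, publications)
X = np.array([[5., 3.], [8., 1.], [7., 4.], [4., 2.], [9., 5.]])   # inputs
Y = np.array([[6., 2.], [7., 3.], [9., 4.], [5., 1.], [8., 6.]])   # outputs
n = X.shape[0]

def ccr_efficiency(o):
    """Input-oriented CCR envelopment LP for DMU o: min theta subject to
    sum_j lam_j x_j <= theta * x_o,  sum_j lam_j y_j >= y_o,  lam >= 0."""
    c = np.r_[1.0, np.zeros(n)]                    # variables: [theta, lam_1..lam_n]
    A_in = np.c_[-X[o], X.T]                       # X^T lam - theta x_o <= 0
    A_out = np.c_[np.zeros(Y.shape[1]), -Y.T]      # -Y^T lam <= -y_o
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(X.shape[1]), -Y[o]]
    bounds = [(None, None)] + [(0, None)] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[0]

for o in range(n):
    print(f"DMU {o}: efficiency = {ccr_efficiency(o):.3f}")
```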

Keywords: supply chain, education, data envelopment analysis, network DEA

Procedia PDF Downloads 370
15972 Modeling the Impact of Controls on Information System Risks

Authors: M. Ndaw, G. Mendy, S. Ouya

Abstract:

Information system risk management helps to reduce or eliminate risk by implementing appropriate controls. In this paper, we propose a model for quantifying the impact of controls on information system risks by automating the residual criticality estimation step of FMECA, which is based on inductive reasoning. For this, we defined three equations based on the type and maturity of controls. For testing, the values obtained with the model were compared to estimates given by interlocutors during different working sessions, and the results are satisfactory. This model allows an optimal assessment of control maturity and facilitates the risk analysis of information systems.
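
The paper's three equations are not given in the abstract; the sketch below is only a generic, hypothetical illustration of scaling an FMECA criticality (RPN) by a control-maturity factor to obtain a residual criticality. The functional form and the maturity weights are assumptions, not the authors' equations.

```python
def rpn(severity, occurrence, detection):
    """Classical FMECA risk priority number on 1-10 scales."""
    return severity * occurrence * detection

def residual_rpn(severity, occurrence, detection, control_maturity):
    """Hypothetical residual criticality: the initial RPN reduced by a factor
    that grows with control maturity (0 = no control, 5 = optimized control).
    The 0.15-per-level reduction is an arbitrary illustrative choice."""
    effectiveness = min(0.15 * control_maturity, 0.75)
    return rpn(severity, occurrence, detection) * (1.0 - effectiveness)

# Example: a risk scored S=8, O=6, D=4, mitigated by a maturity-3 control
initial = rpn(8, 6, 4)
residual = residual_rpn(8, 6, 4, control_maturity=3)
print(f"initial RPN = {initial}, residual RPN = {residual:.0f}")
```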

Keywords: information system, risk, control, FMECA method

Procedia PDF Downloads 356
15971 A Fractional Derivative Model to Quantify Non-Darcy Flow in Porous and Fractured Media

Authors: Golden J. Zhang, Dongbao Zhou

Abstract:

Darcy's law is the fundamental theory in fluid dynamics and engineering applications. Although Darcy linearity was found to be valid for slow, viscous flow, non-linear and non-Darcian flow has been well documented at both small and large flow velocities. Various classical models have been proposed and widely used to quantify non-Darcian flow, including the well-known Forchheimer, Izbash, and Swartzendruber models. Applications, however, revealed limitations of these models. Here we propose a general model built upon the Caputo fractional derivative to quantify non-Darcian flow across flow regimes (laminar to turbulent). Real-world applications and model comparisons showed that the new fractional-derivative model, which extends the fractional model proposed recently by Zhou and Yang (2018), can capture non-Darcian flow at the relatively small velocities of low-permeability deposits and the relatively high velocities of high-permeability sand. A scale effect was also identified for non-Darcian flow in fractured rocks. Therefore, fractional calculus may provide an efficient tool for improving classical models to quantify fluid dynamics in aquatic environments.
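
For readers unfamiliar with the operator used here, the standard L1 finite-difference approximation of a Caputo derivative of order 0 < alpha < 1 can be sketched as follows; the test function and step size are arbitrary.

```python
import numpy as np
from math import gamma

def caputo_l1(f_vals, dt, alpha):
    """L1 approximation of the Caputo fractional derivative (0 < alpha < 1)
    of the sampled function f_vals, evaluated at the final grid point t_n = n*dt."""
    n = len(f_vals) - 1
    coef = dt ** (-alpha) / gamma(2.0 - alpha)
    total = 0.0
    for k in range(n):
        b_k = (k + 1) ** (1.0 - alpha) - k ** (1.0 - alpha)
        total += b_k * (f_vals[n - k] - f_vals[n - k - 1])
    return coef * total

# Check against the exact result D^alpha t = t^(1-alpha) / Gamma(2-alpha)
alpha, dt, T = 0.5, 1e-3, 1.0
t = np.arange(0.0, T + dt, dt)
approx = caputo_l1(t, dt, alpha)          # f(t) = t
exact = T ** (1.0 - alpha) / gamma(2.0 - alpha)
print(f"L1 approximation: {approx:.5f}, exact: {exact:.5f}")
```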

Keywords: fractional derivative, Darcy's law, non-Darcian flow, fluid dynamics

Procedia PDF Downloads 129
15970 Disintegration of Deuterons by Photons Reaction Model for GEANT4 with Dibaryon Formalism

Authors: Jae Won Shin, Chang Ho Hyun

Abstract:

A disintegration of deuterons by photons (dγ → np) reaction model for GEANT4 is developed in this work. By introducing a dibaryon field, an effective field theory can take the effective range contribution to the propagator into account up to infinite order, which makes the convergence of the theory better than that of the pionless effective field theory without dibaryon fields. We develop a hadronic model for GEANT4 which is specialized for the disintegration of the deuteron by photons, dγ → np. For the description of two-nucleon interactions, we employ a so-called pionless effective field theory with dibaryon fields (dEFT). In spite of its simplicity, the theory has proven very effective and useful in applications to various two-nucleon systems and processes at low energies. We apply the new GEANT4 model (G4dEFT) to the calculation of total and differential cross sections in dγ → np and obtain good agreement with experimental data over a wide range of incoming photon energies.

Keywords: dγ → np, dibaryon fields, effective field theory, GEANT4

Procedia PDF Downloads 380
15969 Applicability and Reusability of Fly Ash and Base Treated Fly Ash for Adsorption of Catechol from Aqueous Solution: Equilibrium, Kinetics, Thermodynamics and Modeling

Authors: S. Agarwal, A. Rani

Abstract:

Catechol is a natural polyphenolic compound that exists widely in higher plants such as teas, vegetables, fruits, tobaccos, and some traditional Chinese medicines. Fly ash-based zeolites are capable of adsorbing a wide range of pollutants, but the process of zeolite synthesis is time-consuming and requires technical setups by industry, and the market cost of zeolites is quite high, restricting their use by small-scale industries for the removal of phenolic compounds. The present research proposes a simple alkaline treatment of fly ash (FA) to produce an effective adsorbent for catechol removal from wastewater. The effects of experimental parameters such as pH, temperature, initial concentration, and adsorbent dose on the removal of catechol were studied in a batch reactor. For this purpose, the adsorbent materials were mixed with aqueous solutions containing catechol at initial concentrations of 50-200 mg/L and shaken continuously in a thermostatic orbital incubator shaker at 30 ± 0.1 °C for 24 h. Samples were withdrawn from the shaker at predetermined time intervals and separated by centrifugation (centrifuge model MBL-20) at 2000 rpm for 4 min to yield a clear supernatant for analysis of the equilibrium solute concentrations. Concentrations were measured with a double-beam UV/visible spectrophotometer (model Spectrscan UV 2600/02) at a wavelength of 275 nm for catechol. In the present study, the use of the low-cost adsorbent (BTFA) derived from coal fly ash has been investigated as a substitute for expensive methods of catechol sequestration. The FA and BTFA adsorbents were characterized by XRF, FE-SEM with EDX, FTIR, and surface area and porosity measurements, which establish the chemical constituents, functional groups, and morphology of the adsorbents. The catechol adsorption capacities of the synthesized BTFA and the native material were determined. Adsorption increased slightly with an increase in pH. The monolayer adsorption capacities of FA and BTFA for catechol were 100 mg g⁻¹ and 333.33 mg g⁻¹, respectively, and maximum adsorption occurred within 60 minutes for both adsorbents. The equilibrium data are best fitted by the Freundlich isotherm on the basis of error analysis (RMSE, SSE, and χ²). Adsorption was found to be spontaneous and exothermic on the basis of the thermodynamic parameters (ΔG°, ΔS°, and ΔH°). The pseudo-second-order kinetic model better fitted the data for both FA and BTFA. BTFA showed larger adsorptive capacity, higher separation selectivity, and better recyclability than FA. These findings indicate that BTFA could be employed as an effective and inexpensive adsorbent for the removal of catechol from wastewater.
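
The isotherm-fitting step can be sketched as a non-linear least-squares fit of the Freundlich isotherm qe = KF * Ce^(1/n); the data points below are placeholders, not the measured values.

```python
import numpy as np
from scipy.optimize import curve_fit

def freundlich(Ce, KF, n):
    """Freundlich isotherm: qe = KF * Ce^(1/n)."""
    return KF * Ce ** (1.0 / n)

# Placeholder equilibrium data: Ce (mg/L), qe (mg/g)
Ce = np.array([5.0, 12.0, 25.0, 48.0, 90.0])
qe = np.array([40.0, 75.0, 120.0, 180.0, 260.0])

popt, pcov = curve_fit(freundlich, Ce, qe, p0=[10.0, 2.0])
KF, n = popt
residuals = qe - freundlich(Ce, *popt)
rmse = np.sqrt(np.mean(residuals ** 2))
print(f"KF = {KF:.2f}, 1/n = {1.0 / n:.2f}, RMSE = {rmse:.2f} mg/g")
```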

Keywords: catechol, fly ash, isotherms, kinetics, thermodynamic parameters

Procedia PDF Downloads 127
15968 Evaluation of Liquefaction Potential of Fine Grained Soil: Kerman Case Study

Authors: Reza Ziaie Moayed, Maedeh Akhavan Tavakkoli

Abstract:

This research investigates and evaluates the liquefaction potential at a project site in Kerman city using different methods for fine-grained soils. Examination of the damage caused by recent earthquakes shows that fine-grained soils play an essential role in the level of damage caused by soil liquefaction. However, previous liquefaction investigations have paid limited attention to evaluating the cyclic resistance ratio of fine-grained soils, especially with the SPT method. Although using the standard penetration test (SPT) to find the liquefaction potential of fine-grained soil is not common, it can be a helpful method because of its speed, serviceability, and availability. In the present study, the liquefaction potential is first determined from the soil's physical properties obtained from laboratory tests. Then, using the SPT test and the available criteria for evaluating the cyclic resistance ratio and the factor of safety against liquefaction, the correction for the effect of fines is applied and the results are compared. The results show that using the SPT test for liquefaction assessment is more accurate than using laboratory tests in most cases, due to the contribution of different physical parameters of the soil, which leads to an increase in the ultimate N₁(60,cs).
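
The fines correction and clean-sand cyclic resistance ratio mentioned above are commonly evaluated with the NCEER-type SPT relations (Youd et al., 2001); a sketch is given below. The coefficients are the standard simplified-procedure values, not anything specific to this study, and the example blow count and fines content are arbitrary.

```python
import math

def n1_60cs(n1_60, fines_percent):
    """Clean-sand equivalent blow count N1(60,cs) = alpha + beta * N1(60)
    using the NCEER fines-content correction."""
    fc = fines_percent
    if fc <= 5.0:
        alpha, beta = 0.0, 1.0
    elif fc < 35.0:
        alpha = math.exp(1.76 - 190.0 / fc ** 2)
        beta = 0.99 + fc ** 1.5 / 1000.0
    else:
        alpha, beta = 5.0, 1.2
    return alpha + beta * n1_60

def crr_7_5(n_cs):
    """Cyclic resistance ratio for M = 7.5 (valid for N1(60,cs) < 30;
    higher values are treated as non-liquefiable)."""
    if n_cs >= 30.0:
        return float("inf")
    return (1.0 / (34.0 - n_cs) + n_cs / 135.0
            + 50.0 / (10.0 * n_cs + 45.0) ** 2 - 1.0 / 200.0)

n_cs = n1_60cs(n1_60=14.0, fines_percent=25.0)   # example values
crr = crr_7_5(n_cs)
print(f"N1(60,cs) = {n_cs:.1f}, CRR_7.5 = {crr:.3f}")
# Factor of safety against liquefaction = CRR (with magnitude scaling) / CSR
# from the design earthquake.
```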

Keywords: liquefaction, cyclic resistance ratio, SPT test, clay soil, cohesive soils

Procedia PDF Downloads 102
15967 A Semidefinite Model to Quantify Dynamic Forces in the Powertrain of Torque Regulated Bascule Bridge Machineries

Authors: Kodo Sektani, Apostolos Tsouvalas, Andrei Metrikine

Abstract:

The reassessment of existing movable bridges in the Netherlands has created the need for acceptance/rejection criteria to assess whether the machineries meet certain design demands. However, the existing design code defines a different limit state design, meant for new machineries, which is based on a simple linear spring-mass model. Observations show that existing bridges do not conform to the model predictions. In fact, movable bridges are nonlinear systems consisting of mechanical components such as gears, electric motors, and brakes. Moreover, each movable bridge is characterized by a unique set of parameters, whereas in the existing code various variables that describe the physical characteristics of the bridge are neglected or replaced by partial factors. For instance, the damping ratio ζ, which is different for drawbridges compared to bascule bridges, is taken as a constant for all bridge types. In this paper, a model is developed that overcomes some of the limitations of existing modelling approaches to capture the dynamics of the powertrain of a class of bridge machineries. First, a semidefinite dynamic model is proposed, which accounts for stiffness, damping, and some additional variables of the physical system that are neglected by the code, such as nonlinear braking torques. The model gives an upper bound on the peak forces/torques occurring in the powertrain during emergency braking. Second, a discrete nonlinear dynamic model is discussed, with realistic motor torque characteristics during normal operation. This model succeeds in accurately predicting the full time history of the stress state over the opening and closing cycle for fatigue purposes.
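
To illustrate what a semidefinite powertrain model looks like (a generic sketch, not the authors' model), consider two inertias, a drive/brake side and a reduced bridge leaf, coupled by shaft stiffness and damping; the stiffness matrix is only positive semidefinite because of the rigid-body rotation mode. All parameter values and the braking-torque profile are placeholders.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Placeholder parameters (SI units, reduced to the motor shaft)
J1, J2 = 50.0, 4000.0          # drive-side and bridge-side inertias [kg m^2]
k, c = 2.0e5, 400.0            # shaft stiffness [Nm/rad] and damping [Nms/rad]
T_brake = -8.0e3               # constant emergency braking torque on side 1 [Nm]
T_load = 1.5e3                 # wind/imbalance torque acting on the leaf [Nm]

def rhs(t, y):
    th1, w1, th2, w2 = y
    # Semidefinite stiffness: only the relative rotation (th1 - th2) is restrained
    T_shaft = k * (th1 - th2) + c * (w1 - w2)
    return [w1, (T_brake - T_shaft) / J1,
            w2, (T_shaft + T_load) / J2]

y0 = [0.0, 3.0, 0.0, 3.0]      # both sides initially spinning at 3 rad/s
sol = solve_ivp(rhs, (0.0, 2.0), y0, max_step=1e-3)

shaft_torque = k * (sol.y[0] - sol.y[2]) + c * (sol.y[1] - sol.y[3])
print(f"peak shaft torque during braking: {np.abs(shaft_torque).max()/1e3:.1f} kNm")
```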

Keywords: dynamics of movable bridges, bridge machinery, powertrains, torque measurements

Procedia PDF Downloads 158
15966 Global Analysis of HIV Virus Models with Cell-to-Cell Transmission

Authors: Hossein Pourbashash

Abstract:

Recent experimental studies have shown that HIV can be transmitted directly from cell to cell when structures called virological synapses form during interactions between T cells. In this article, we describe a new within-host model of HIV infection that incorporates two mechanisms: infection by free virions and the direct cell-to-cell transmission. We conduct the local and global stability analysis of the model. We show that if the basic reproduction number R0 ≤ 1, the virus is cleared and the disease dies out; if R0 > 1, the virus persists in the host. We also prove that the unique positive equilibrium attracts all positive solutions under additional assumptions on the parameters.
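
A typical within-host formulation with both infection routes (free virions and cell-to-cell), in the spirit of the class of models analyzed here though not necessarily identical to it, can be written and integrated as below; the parameter values are illustrative only.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (per day)
lam, d = 1e4, 0.01            # target cell production and death rates
beta_v, beta_c = 1e-8, 1e-8   # virus-to-cell and cell-to-cell infection rates
delta, p, c = 0.7, 500.0, 5.0 # infected-cell death, virion production, clearance

def rhs(t, y):
    T, I, V = y
    infection = beta_v * T * V + beta_c * T * I
    return [lam - d * T - infection,     # dT/dt
            infection - delta * I,       # dI/dt
            p * I - c * V]               # dV/dt

T0 = lam / d
R0 = T0 * (beta_v * p / (c * delta) + beta_c / delta)   # both transmission routes
sol = solve_ivp(rhs, (0, 200), [T0, 1.0, 0.0], max_step=0.1)

print(f"R0 = {R0:.2f}")
print(f"final infected cells: {sol.y[1, -1]:.3g}  (persists if R0 > 1, dies out if R0 <= 1)")
```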

Keywords: HIV virus model, cell-to-cell transmission, global stability, Lyapunov function, second compound matrices

Procedia PDF Downloads 519
15965 Economical Working Hours per Workday for a Production Worker under Hazardous Environment

Authors: Mohammed Darwish

Abstract:

Workplace injuries cost organizations significant amounts of money. The causes of injuries at the workplace are well documented in the literature and attributed to a variety of reasons, one important reason being long working hours. The purpose of this paper is to develop a mathematical model that finds the optimal number of working hours per workday. The developed model minimizes the expected total cost, which consists of the expected cost incurred due to unsafe workplace conditions, the cost of lost production due to work incidents, and the production cost.
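
The cost components named above can be combined into a toy optimization over the workday length; every functional form, probability, and cost figure in the sketch below is invented to illustrate the structure of such a model and is not the model developed in the paper.

```python
from scipy.optimize import minimize_scalar

# Purely illustrative cost components as functions of daily working hours h
WAGE = 30.0            # production (labour) cost per hour
OUTPUT_VALUE = 55.0    # value produced per productive hour
INJURY_COST = 20000.0  # expected cost of one incident (treatment, downtime, ...)

def injury_prob(h):
    """Assumed incident probability per workday, rising sharply with long hours."""
    return 1e-4 * h ** 2

def expected_total_cost(h):
    production_cost = WAGE * h
    incident_cost = INJURY_COST * injury_prob(h)
    lost_production = OUTPUT_VALUE * 4.0 * injury_prob(h)   # assume 4 h lost per incident
    revenue = OUTPUT_VALUE * h
    return production_cost + incident_cost + lost_production - revenue

res = minimize_scalar(expected_total_cost, bounds=(4.0, 14.0), method="bounded")
print(f"cost-minimizing workday length: {res.x:.2f} hours")
```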

Keywords: 8-hour workday, mathematical model, optimal working hours, workplace injuries

Procedia PDF Downloads 156
15964 In silico Model of Transamination Reaction Mechanism

Authors: Sang-Woo Han, Jong-Shik Shin

Abstract:

ω-Transaminase (ω-TA) is broadly used for synthesizing chiral amines with high enantiopurity. However, the reaction mechanism of ω-TA has not been well studied, contrary to α-transaminases (α-TA) such as AspTA. Here, we propose an in silico model of the reaction mechanism of ω-TA. Based on the modeling results, which showed large free energy gaps between the external aldimine and the quinonoid on deamination (or between the ketimine and the quinonoid on amination), withdrawal of the Cα-H appears to be the critical step that determines the reaction rate in both the amination and deamination reactions, which is consistent with previous research. Hyperconjugation was also observed in both the external aldimine and the ketimine, which weakens the Cα-H bond and facilitates Cα-H abstraction.

Keywords: computational modeling, reaction intermediates, ω-transaminase, in silico model

Procedia PDF Downloads 548
15963 Academic Staff Recruitment in Islamic University: A Proposed Holistic Model

Authors: Syahruddin Sumardi Samindjaya, Indra Fajar Alamsyah, Junaidah Hashim

Abstract:

Purpose: This study attempts to explore and present a proposed recruitment model for Islamic universities aligned with their holistic role. Design/methodology/approach: This is a conceptual paper that adopts an exploratory approach; a review of literature and documents related to the topic is used to analyse the content. Findings: Recruitment is fundamental for any organization to achieve its goals effectively, and staffing in universities is vital due to the important role of lecturers. Currently, Islamic universities still adopt the common recruitment process for their academic staff, even though they have their own characteristics embedded in their institutions. The FCWC (Foundation, Capability, Worldview and Commitment) model of recruitment is proposed to suit the holistic character of the Islamic university. Research limitations/implications: Further studies are required to empirically validate the concept through systematic investigations; in addition, measuring the model with a purpose-designed instrument would be valuable. Practical implications: The model provides a map and an alternative recruitment tool for Islamic universities to determine a recruitment process appropriate to their institutions. It also allows stakeholders and policymakers to consider the Islamic values that should be inculcated in Islamic higher learning institutions. Originality/value: This study makes a foundational contribution as an early step in a sequence of research.

Keywords: academic staff, Islamic values, recruitment model, university

Procedia PDF Downloads 187
15962 Comparison of Different Data Acquisition Techniques for Shape Optimization Problems

Authors: Attila Vámosi, Tamás Mankovits, Dávid Huri, Imre Kocsis, Tamás Szabó

Abstract:

Non-linear FEM calculations are indispensable when important technical information, such as the operating performance of a rubber component, is desired. Rubber bumpers built into air-spring structures may undergo large deformations under load, which in itself shows non-linear behavior. The changing contact range between the parts and the incompressibility of the rubber increase this non-linear behavior further. The material characterization of an elastomeric component is also a demanding engineering task. The shape optimization problem of rubber parts has led to the study of FEM-based calculation processes, and this type of problem has been posed and investigated by several authors. In this paper, the time demand of certain calculation methods is studied and possibilities for time reduction are presented.
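
One common way to cut the time demand of repeated non-linear FEM runs, consistent with the support vector regression keyword of this work, is to fit an SVR surrogate to a modest set of FEM samples and query the surrogate inside the optimization loop; the sketch below uses a synthetic stand-in for the FEM response and is an illustration, not the authors' workflow.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(1)

# Synthetic stand-in for FEM results: design variables -> bumper stiffness
X = rng.uniform([10.0, 2.0], [60.0, 12.0], size=(80, 2))   # e.g. radius, wall thickness
y = 0.8 * X[:, 0] + 3.0 * X[:, 1] ** 1.5 + rng.normal(0, 2.0, 80)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
surrogate = make_pipeline(StandardScaler(), SVR(C=100.0, epsilon=0.5))
surrogate.fit(X_train, y_train)

print("surrogate R^2 on held-out FEM samples:", round(surrogate.score(X_test, y_test), 3))
# The cheap surrogate can now replace full FEM calls during shape optimization,
# with occasional FEM re-evaluation to verify candidate designs.
```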

Keywords: rubber bumper, data acquisition, finite element analysis, support vector regression

Procedia PDF Downloads 474
15961 Numerical Prediction of Crack Width of Concrete Dapped-End Beams

Authors: Jatziri Y. Moreno-Martinez, Arturo Galvan, Xavier Chavez Cardenas, Hiram Arroyo

Abstract:

Several methods have been utilized to study the prediction of cracking of concrete structures under loading; finite element analysis is an alternative that shows good results. The aim of this work was the numerical study of crack width in reinforced concrete beams with dapped ends, which are frequently found in bridge girders and precast concrete construction. Properly restricting cracking is an important aspect of the design of dapped ends, since cracks that exceed the allowable widths are unacceptable in an environment that is aggressive toward the reinforcing steel. For simulating the crack width, the discrete crack approach was considered by means of a Cohesive Zone Model (CZM) using a function to represent the crack opening. Two dapped-end specimens were constructed and tested in the Laboratory of Structures and Materials of the Engineering Institute of UNAM. The first case considers reinforcement based on hangers as well as vertical and horizontal rings; in the second case, 50% of the vertical stirrups connecting the dapped end to the main part of the beam were replaced by an equivalent (vertically projected) area of diagonal bars. The loading protocol consisted of applying symmetrical loading up to the service load. The models were built in the software package ANSYS v. 16.2. The concrete structure was modeled using three-dimensional solid elements SOLID65, capable of cracking in tension and crushing in compression, and the Drucker-Prager yield surface was used to include the plastic deformations. The reinforcement was introduced with a smeared approach. Interface delamination was modeled by traditional fracture mechanics methods such as the nodal release technique, adopting softening relationships between the tractions and the separations, which in turn introduce a critical fracture energy that is also the energy required to break apart the interface surfaces; this technique is called the cohesive zone model. The interface surfaces of the materials are represented by surface-to-surface contact elements (CONTA173) with bonded initial contact. The Mode I dominated bilinear CZM model assumes that the separation of the material interface is dominated by the displacement jump normal to the interface. Furthermore, the crack opening was taken into consideration through the maximum normal contact stress, the contact gap at the completion of debonding, and the maximum equivalent tangential contact stress. The contact elements were placed in the re-entrant corner of the crack. To validate the proposed approach, the results obtained with this procedure are compared with the experimental tests. A good correlation between the experimental and numerical load-displacement curves was obtained, and the numerical models also allowed the load-crack width curves to be obtained. In these two cases, the proposed model confirms the capability of predicting the maximum crack width with an error of ± 30%. The predicted load-displacement curve and crack location also agree favorably with the tests. Finally, the orientation of the crack is fundamental for the prediction of crack width, and the results regarding crack width can be considered good from a practical point of view.
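
The Mode I bilinear cohesive law referred to above relates the normal traction to the crack opening through the maximum normal contact stress, the gap at complete debonding, and the resulting fracture energy; a minimal sketch with placeholder values (not the calibrated ANSYS inputs) is:

```python
import numpy as np

SIGMA_MAX = 3.0e6        # maximum normal traction [Pa] (placeholder)
DELTA_C = 1.0e-4         # gap at completion of debonding [m] (placeholder)
DELTA_0 = 0.1 * DELTA_C  # opening at peak traction (assumed 10% of DELTA_C)

def bilinear_traction(delta):
    """Mode I bilinear cohesive law: linear rise to SIGMA_MAX at DELTA_0,
    then linear softening to zero traction at DELTA_C."""
    delta = np.asarray(delta, dtype=float)
    rising = SIGMA_MAX * delta / DELTA_0
    softening = SIGMA_MAX * (DELTA_C - delta) / (DELTA_C - DELTA_0)
    t = np.where(delta <= DELTA_0, rising, softening)
    return np.clip(t, 0.0, SIGMA_MAX)

# Critical fracture energy = area under the traction-separation curve
G_c = 0.5 * SIGMA_MAX * DELTA_C
opening = np.linspace(0.0, 1.2 * DELTA_C, 7)
print("traction [MPa]:", np.round(bilinear_traction(opening) / 1e6, 2))
print(f"critical fracture energy G_c = {G_c:.1f} J/m^2")
```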

Keywords: cohesive zone model, dapped-end beams, discrete crack approach, finite element analysis

Procedia PDF Downloads 170
15960 Efficiency Improvement of REV-Method for Calibration of Phased Array Antennas

Authors: Daniel Hristov

Abstract:

The paper describes the principle of operation, simulation, and physical validation of a method for the simultaneous acquisition of the gain and phase states of multiple antenna elements and the corresponding feed lines across a Phased Array Antenna (PAA). The derived values of gain and phase are used for PAA calibration. The method utilizes the Rotating-Element Electric-Field Vector (REV) principle, currently used for estimating the gain and phase state of a single antenna element across an active antenna aperture. A significant reduction of the procedure execution time is achieved by simultaneously setting different phase delays on multiple phase shifters, followed by a single power measurement. The initial gain and phase states are calculated using spectral and correlation analysis of the measured power series.
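
The REV principle exploited here can be illustrated with a short simulation: sweeping the phase of one element through a full turn modulates the total power sinusoidally, and the fundamental Fourier component of that power series carries the element's amplitude and phase relative to the rest of the array. The array coefficients below are arbitrary test values.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 8
weights = (0.8 + 0.4 * rng.random(N)) * np.exp(1j * rng.uniform(-np.pi, np.pi, N))

def rev_measurement(element, n_steps=32):
    """Sweep one element's phase shifter over a full turn and return the complex
    fundamental Fourier component of the measured total-power series."""
    phases = 2.0 * np.pi * np.arange(n_steps) / n_steps
    field = weights.sum() - weights[element] + weights[element] * np.exp(1j * phases)
    power = np.abs(field) ** 2            # the only quantity a power meter sees
    return np.fft.fft(power)[1] * 2.0 / n_steps

k = 2
fund = rev_measurement(k)
rest = weights.sum() - weights[k]
# Theory: fundamental = 2*|E_rest|*|a_k| * exp(j*(arg a_k - arg E_rest))
print("recovered amplitude term:", np.abs(fund),
      "expected:", 2 * np.abs(rest) * np.abs(weights[k]))
print("recovered relative phase:", np.angle(fund),
      "expected:", np.angle(weights[k] / rest))
```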

Keywords: antenna, antenna arrays, calibration, phase measurement, power measurement

Procedia PDF Downloads 139
15959 Mass Transfer Studies of Carbon Dioxide Absorption in Sodium Hydroxide in Millichannels

Authors: A. Durgadevi, S. Pushpavanam

Abstract:

In this work, absorption studies are carried out by conducting experiments with 99.9 v/v% pure CO₂ and various concentrations of sodium hydroxide solution in a T-junction circular glass milli-channel. The gas is absorbed into the aqueous phase, resulting in shrinking of the gas slugs. This phenomenon is used to develop a lumped parameter model, with which the chemical dissolution dynamics and the mass transfer characteristics of the CO₂-NaOH system are analysed. The liquid-side mass transfer coefficient is determined with the help of the experimental data.
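
A lumped estimate of the liquid-side coefficient from the shrinking-slug data can be sketched under strong simplifying assumptions that are ours rather than the paper's (ideal gas slug, interfacial concentration fixed by an assumed Henry constant, interfacial area approximated from the slug geometry, placeholder measurements):

```python
import numpy as np

# Assumed operating conditions and geometry (placeholders)
P = 101325.0           # Pa
T = 298.15             # K
R = 8.314              # J/(mol K)
D = 1.0e-3             # channel diameter [m]
C_STAR = P / 2.9e3     # interfacial CO2 concentration from an assumed Henry constant [mol/m^3]

# Placeholder measurement: slug length [m] vs time [s]
t = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])
L = np.array([5.0, 4.3, 3.7, 3.2, 2.8, 2.5]) * 1e-3

area_cs = np.pi * D ** 2 / 4.0
n_gas = P * area_cs * L / (R * T)                   # moles of CO2 in the slug (ideal gas)
dn_dt = np.gradient(n_gas, t)                       # absorption rate (negative)
A_int = np.pi * D * L + 2.0 * area_cs               # film + two end caps (approximation)

kL = -dn_dt / (A_int * C_STAR)                      # from flux N = kL * A * C*
print("estimated liquid-side kL [m/s]:", np.round(kL[1:-1], 6))
```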

Keywords: absorption, dissolution dynamics, lumped parameter model, milli-channel, mass transfer coefficient

Procedia PDF Downloads 285
15958 Joint Modeling of Longitudinal and Time-To-Event Data with Latent Variable

Authors: Xinyuan Y. Song, Kai Kang

Abstract:

Joint models for analyzing longitudinal and survival data are widely used to investigate the relationship between a failure time process and time-variant predictors. A common assumption in conventional joint models in the survival analysis literature is that all predictors are observable. However, this assumption may not always hold, because unobservable traits, namely latent variables, which are indirectly observable and should be measured through multiple observed variables, are commonly encountered in medical, behavioral, and financial research settings. In this study, a joint modeling approach that deals with this feature is proposed. The proposed model comprises three parts. The first part is a dynamic factor analysis model for characterizing latent variables through multiple observed indicators over time. The second part is a random coefficient trajectory model for describing the individual trajectories of the latent variables. The third part is a proportional hazards model for examining the effects of time-invariant predictors and the longitudinal trajectories of time-variant latent risk factors on the hazards of interest. A Bayesian approach coupled with a Markov chain Monte Carlo algorithm is used to perform statistical inference. An application of the proposed joint model to a study from the Alzheimer's Disease Neuroimaging Initiative is presented.

Keywords: Bayesian analysis, joint model, longitudinal data, time-to-event data

Procedia PDF Downloads 147
15957 Factors Influencing University Students' Acceptance of New Technology

Authors: Fatma Khadra

Abstract:

The objective of this research is to identify the acceptance of new technology in a sample of 150 participants from Qatar University. Based on the Technology Acceptance Model (TAM), we used Davis's (1989) scale, which contains two item scales for Perceived Usefulness and Perceived Ease of Use. The TAM represents an important theoretical contribution toward understanding how users come to accept and use technology. This model suggests that when people are presented with a new technology, a number of variables influence their decision about how and when they will use it. The results showed that participants accept new technology because of its flexibility, clarity, enhancement of the experience, enjoyment, ease, and usefulness. The results also showed that younger participants accept new technology more readily than others.

Keywords: new technology, perceived usefulness, perceived ease of use, technology acceptance model

Procedia PDF Downloads 324
15956 Design of Knowledge Management System with Geographic Information System

Authors: Angga Hidayah Ramadhan, Luciana Andrawina, M. Azani Hasibuan

Abstract:

Data can be the core of a decision if it is properly treated and processed: data is processed into information, and information into knowledge, to produce wisdom or a decision. Today, many organizations have not realized this, including the XYZ University Admission Directorate, the executor of the national admission scheme Seleksi Masuk Bersama (SMB), where workers have so far relied only on their intuition to make decisions. If the data were properly processed, the directorate could analyze it to make the right decisions and maximize PIN sales to student candidates and registrants who follow SMB. Therefore, a Knowledge Management System (KMS) with a Geographic Information System (GIS) using the 5C4C approach is needed, so that organizational data becomes more useful and supports decision making. The system processes the PIN sales data into information with 5C (Contextualized, Categorized, Calculated, Corrected, Condensed) and converts information into knowledge with 4C (Comparing, Consequences, Connections, Conversation), through several steps, until the data become useful for making decisions, resolving problems, communicating, and helping inexperienced employees learn more quickly. Visualization based on spatial data is provided through GIS functionality, which indicates events in each province with indicators that facilitate use of the system. The system also has a function to capture tacit knowledge and convert it into explicit knowledge in an expert system, based on problems identified from the consequences of the information. With the system, each team can make decisions in the same structured way and, importantly, based on actual events and data.

Keywords: 5C4C, data, information, knowledge

Procedia PDF Downloads 464
15955 Multiscale Syntheses of Knee Collateral Ligament Stresses: Aggregate Mechanics as a Function of Molecular Properties

Authors: Raouf Mbarki, Fadi Al Khatib, Malek Adouni

Abstract:

Knee collateral ligaments play a significant role in restraining excessive frontal motion (varus/valgus rotations). In this investigation, a multiscale framework was developed based on the structural hierarchies of the collateral ligaments, starting from the bottom (the tropocollagen molecule) up to the fiber-reinforced structure. Experimental data from failure tensile tests were the principal driver of the developed model. The model was calibrated statistically using Bayesian calibration due to the high number of unknown parameters. It was then scaled up to fit the real structure of the collateral ligaments and simulated under realistic boundary conditions. Predictions have been successful in describing the observed transient response of the collateral ligaments during tensile tests under pre- and post-damage loading conditions. The maximum stresses and strengths of the collateral ligaments were observed near the femoral insertions, a result that is in good agreement with experimental investigations. Also, for the first time, damage initiation and propagation were documented with this model as a function of the cross-link density between tropocollagen molecules.

Keywords: multiscale model, tropocollagen, fibrils, ligaments

Procedia PDF Downloads 160
15954 Temporal and Spatial Distribution Prediction of Patinopecten yessoensis Larvae in Northern China Yellow Sea

Authors: RuiJin Zhang, HengJiang Cai, JinSong Gui

Abstract:

It takes Patinopecten yessoensis larvae more than 20 days to go from spawning to settlement. Due to natural environmental factors such as currents, Patinopecten yessoensis larvae are transported over distances of hundreds of kilometers, leading to highly unstable spatial and temporal distributions and great difficulty in natural spat collection. Predicting the distribution is therefore of great significance for improving the operating efficiency of spat collection. A hydrodynamic model of the northern China Yellow Sea was established based on the equations of motion of physical oceanography and verified against tidal harmonic constants and measured velocity data from Dalian Bay. Based on the passive drift characteristics of the larvae, and combining the hydrodynamic model with a particle tracking model, a spatial and temporal distribution prediction model was established, and the distribution of the larvae under the influence of currents and wind was simulated. The model results show that ocean currents have the greatest impact on the passive drift path and diffusion of Patinopecten yessoensis larvae; the impact of wind is also important, changing the direction and speed of the drift. Patinopecten yessoensis larvae were generated in the sea along Zhangzi Island and Guanglu-Dachangshan Island, but after two months, under the influence of wind and currents, the larvae appeared west of Dalian and south of Lvshun, and even in Bohai Bay. The model results are consistent with qualitative analyses in the relevant literature, and this conclusion explains, from the perspective of numerical simulation, where the larvae come from.
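
The passive-drift simulation combines current advection, a wind-drag term, and random-walk diffusion; a minimal 2D particle-tracking sketch is given below, with idealized constant fields standing in for the hydrodynamic model output.

```python
import numpy as np

rng = np.random.default_rng(7)

def track_larvae(n_particles=500, days=30, dt_hours=1.0,
                 wind_drag=0.03, diffusivity=10.0):
    """Passive particle tracking: advection by current plus a fraction of the wind
    velocity, plus a random-walk term for turbulent diffusion.
    The current and wind fields are idealized constants (m/s) here; in practice
    they would be interpolated from the hydrodynamic model at each position/time."""
    dt = dt_hours * 3600.0
    pos = np.zeros((n_particles, 2))                 # start at a common release site
    current = np.array([0.10, 0.02])                 # eastward-dominated current
    wind = np.array([-2.0, 1.0])                     # wind velocity
    for _ in range(int(days * 24 / dt_hours)):
        advection = (current + wind_drag * wind) * dt
        random_walk = np.sqrt(2.0 * diffusivity * dt) * rng.normal(size=pos.shape)
        pos += advection + random_walk
    return pos / 1000.0                              # km

final = track_larvae()
print("mean drift (km):", np.round(final.mean(axis=0), 1))
print("spread (std, km):", np.round(final.std(axis=0), 1))
```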

Keywords: numerical simulation, Patinopecten yessoensis larvae, predicting model, spatial and temporal distribution

Procedia PDF Downloads 306
15953 Building Education Leader Capacity through an Integrated Information and Communication Technology Leadership Model and Tool

Authors: Sousan Arafeh

Abstract:

Educational systems and schools worldwide are increasingly reliant on information and communication technology (ICT). Unfortunately, most educational leadership development programs do not offer formal curricular and/or field experiences that prepare students for managing ICT resources, personnel, and processes. The result is a steep learning curve for the leader and his/her staff and dissipated organizational energy that compromises desired outcomes. To address this gap in education leaders’ development, Arafeh’s Integrated Technology Leadership Model (AITLM) was created. It is a conceptual model and tool that educational leadership students can use to better understand the ICT ecology that exists within their schools. The AITL Model consists of six 'infrastructure types' where ICT activity takes place: technical infrastructure, communications infrastructure, core business infrastructure, context infrastructure, resources infrastructure, and human infrastructure. These six infrastructures are further divided into 16 key areas that need management attention. The AITL Model was created by critically analyzing existing technology/ICT leadership models and working to make something more authentic and comprehensive regarding school leaders’ purview and experience. The AITL Model then served as a tool when it was distributed to over 150 educational leadership students who were asked to review it and qualitatively share their reactions. Students said the model presented crucial areas of consideration that they had not been exposed to before and that the exercise of reviewing and discussing the AITL Model as a group was useful for identifying areas of growth that they could pursue in the leadership development program and in their professional settings. While development in all infrastructures and key areas was important for students’ understanding of ICT, they noted that they were least aware of the importance of the intangible area of the resources infrastructure. The AITL Model will be presented and session participants will have an opportunity to review and reflect on its impact and utility. Ultimately, the AITL Model is one that could have significant policy and practice implications. At the very least, it might help shape ICT content in educational leadership development programs through curricular and pedagogical updates.

Keywords: education leadership, information and communications technology, ICT, leadership capacity building, leadership development

Procedia PDF Downloads 117
15952 Estimating the Volatility of Stock Markets in Case of Financial Crisis

Authors: Gultekin Gurcay

Abstract:

In this paper, the effects of the financial crisis on stock markets and their responses were analyzed, period by period. The dimensions of the financial crisis impact on the stock markets were investigated with a GARCH model. In this context, the S&P 500 stock market is modeled together with the DAX, NIKKEI, and BIST 100. In this way, the effects of changes in the S&P 500 on the European and Asian stock markets are examined. The conditional variance coefficients are calculated through the GARCH model, and within the scope of the crisis period, the conditional covariance coefficients are analyzed comparatively.
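
A GARCH(1,1) conditional-variance estimate of the kind used here can be obtained with the Python arch package, as in the rough sketch below; the simulated return series stands in for the actual S&P 500 / DAX / NIKKEI / BIST 100 data.

```python
import numpy as np
from arch import arch_model

# Simulated daily returns (in %) standing in for an index return series
rng = np.random.default_rng(0)
returns = 1.2 * rng.standard_t(df=6, size=1500)

model = arch_model(returns, mean="Constant", vol="GARCH", p=1, q=1, dist="normal")
res = model.fit(disp="off")

print(res.params)                      # omega, alpha[1], beta[1] of the variance equation
cond_vol = res.conditional_volatility  # sigma_t series, e.g. compared across crisis windows
print("average conditional volatility:", round(float(np.mean(cond_vol)), 3))
```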

Keywords: conditional variance coefficient, financial crisis, GARCH model, stock market

Procedia PDF Downloads 296