Search results for: daily probability model
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 19542

16362 Constructing Service Innovation Model for SMEs in Automotive Service Industries: A Case Study of Auto Repair Motorcycle in Makassar City

Authors: Muhammad Farid, Jen Der Day

Abstract:

The purpose of this study is to construct a service innovation model for small and medium-sized enterprises (SMEs) in the automotive service industries. A case study of motorcycle repair shops in Makassar City illustrates how the model measures innovation implementation and the degree of innovation, and identifies the type of innovation pursued by SMEs. We interviewed 10 SME managers and analyzed their answers. We find that innovation implementation has been slow, producing on average only 0.62 new service innovations per year. Incremental innovation is the option SMEs currently take, because they choose the safer road of improving their services continuously. Before attempting radical innovation, they still weigh cost, systems, and the readiness of their human resources.

Keywords: service innovation, incremental innovation, SMEs, automotive service industries

Procedia PDF Downloads 355
16361 Proposition Model of Micromechanical Damage to Predict Reduction in Stiffness of a Fatigued A-SMC Composite

Authors: Houssem Ayari

Abstract:

Sheet molding compounds (SMC) are high-strength, glass-reinforced thermoset moulding materials processed by thermocompression. SMC composites combine glass fibres with polyester, phenolic, vinyl ester, or unsaturated acrylic resins to produce a high-strength moulding compound, and are usually formulated to meet the performance requirements of the moulded part. In addition, the vinyl ester resins used in the new advanced SMC (A-SMC) systems have many desirable features, including mechanical properties comparable to epoxy, excellent chemical and tensile resistance, and cost competitiveness. In this paper, a micromechanical damage model is proposed to account for the evolution of the Young's modulus of an A-SMC composite under fatigue testing. The proposed model and approach are in good agreement with the experimental results.
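The reported outcome, a reduction in stiffness with fatigue cycling, is commonly expressed through a scalar damage variable. As a minimal illustrative sketch (not the paper's Mori-Tanaka-based model; the parameters are hypothetical), a power-law damage evolution could be written as:

```python
def residual_modulus(e0, n_cycles, n_failure, alpha=0.4, beta=2.0):
    """Residual Young's modulus E(n) = E0 * (1 - D(n)) with a
    phenomenological damage law D(n) = alpha * (n / n_failure) ** beta.
    alpha and beta are hypothetical fit parameters, not values from
    the paper."""
    damage = alpha * (n_cycles / n_failure) ** beta
    return e0 * (1.0 - damage)
```

In a real identification, alpha and beta would be fit to the measured modulus decay over the fatigue test.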

Keywords: composites SFRC, damage, fatigue, Mori-Tanaka

Procedia PDF Downloads 111
16360 Human Performance Technology (HPT) as an Entry Point to Achieve Organizational Development in Educational Institutions of the Ministry of Education

Authors: Alkhathlan Mansour

Abstract:

This research aims at achieving organizational development in the educational institutions of the governorate of Al-Kharj through a human performance technology (HPT) model named "The Intellectual Model to Improve Human Performance". To achieve this goal, questionnaires were administered to a sample of 120 respondents: department managers at Prince Sattam Bin Abdulaziz University (50), educational supervisors in the Department of Education (40), and school administrators in the governorate (30); the views of education experts on the proposal were gathered through personal interviews. Among the most important results is that many obstacles prevent organizational development in these institutions, so the research proposes a model to achieve organizational development through human performance technologies. The researcher also recommends that administrators ensure fairness in the distribution of incentives to employees of educational institutions, train leaders in educational institutions on organizational development strategies, and prepare organizational development experts within the institutions to develop the policies and procedures each institution needs.

Keywords: human performance, development, education, organizational

Procedia PDF Downloads 285
16359 In vitro Skin Model for Enhanced Testing of Antimicrobial Textiles

Authors: Steven Arcidiacono, Robert Stote, Erin Anderson, Molly Richards

Abstract:

There are numerous standard test methods for antimicrobial textiles that measure activity against specific microorganisms. However, these results often do not translate to the performance of treated textiles when worn by individuals. Standard test methods apply a single target organism, grown under optimal conditions, to a textile, then recover the organism to quantitate and determine activity; this does not reflect the actual performance environment, which consists of polymicrobial communities under less-than-optimal conditions interacting with the skin substrate. Here we propose the development of an in vitro skin model method to bridge the gap between lab testing and wear studies. The model will consist of a defined polymicrobial community of 5-7 commensal microbes simulating the skin microbiome, seeded onto a solid tissue platform representing the skin. The protocol would entail adding a non-commensal test organism of interest to the defined community and applying a textile sample to the solid substrate. Following incubation, the textile would be removed and the organisms recovered and quantitated to determine antimicrobial activity. Important parameters include identification and assembly of the defined polymicrobial community, growth conditions that allow the establishment of a stable community, and choice of skin surrogate. This model could answer the following questions: 1) Is the treated textile effective against the target organism? 2) How is the defined community affected? 3) Does the textile cause unwanted effects on the skin simulant? The proposed model would determine activity under conditions comparable to the intended application and provide expanded knowledge relative to current test methods.
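Quantitation after recovery is typically reported as a log reduction of viable organisms against an untreated control. A minimal sketch of that calculation (the counts below are hypothetical, and this is a generic metric rather than the protocol's specific readout):

```python
import math

def log_reduction(cfu_control, cfu_treated):
    """Antimicrobial activity as the log10 reduction in recovered
    colony-forming units relative to an untreated control swatch."""
    return math.log10(cfu_control / cfu_treated)
```

A 3-log reduction, for example, means the treated textile recovered a thousand-fold fewer viable organisms than the control.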

Keywords: antimicrobial textiles, defined polymicrobial community, in vitro skin model, skin microbiome

Procedia PDF Downloads 129
16358 Predicting Radioactive Waste Glass Viscosity, Density and Dissolution with Machine Learning

Authors: Joseph Lillington, Tom Gout, Mike Harrison, Ian Farnan

Abstract:

The vitrification of high-level nuclear waste within borosilicate glass and its incorporation within a multi-barrier repository deep underground is widely accepted as the preferred disposal method. However, for this to happen, any safety case will require validation that the initially localized radionuclides will not be released into the near/far-field in significant quantities. Therefore, accurate mechanistic models are necessary to predict glass dissolution, and these should be robust to a variety of incorporated waste species and leaching test conditions, particularly given substantial variations across international waste-streams. Here, machine learning is used to predict glass material properties (viscosity, density) and glass leaching model parameters from large-scale industrial data. A variety of machine learning algorithms have been compared to assess performance. Density was predicted solely from composition, whereas viscosity additionally considered temperature. To predict suitable glass leaching model parameters, a large simulated dataset was created by coupling MATLAB and the chemical reactive-transport code HYTEC, considering the state-of-the-art GRAAL model (glass reactivity in allowance of the alteration layer). The trained models were then applied to the large-scale industrial experimental data to identify potentially appropriate model parameters. Results indicate that ensemble methods can accurately predict viscosity as a function of temperature and composition across all three industrial datasets. Glass density prediction shows reliable learning performance, with predictions primarily within the experimental uncertainty of the test data. Furthermore, machine learning can predict the behavior of glass dissolution model parameters, demonstrating potential value in GRAAL model development and in assessing suitable model parameters for large-scale industrial glass dissolution data.
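As a minimal stdlib-only illustration of the ensemble idea (a far simpler stand-in for the random forests and boosting the paper evaluates), one can bootstrap-aggregate linear fits of log-viscosity against inverse temperature, the classic Arrhenius-type feature; the data below are synthetic:

```python
import random

def fit_line(xs, ys):
    """Ordinary least-squares fit y = a + b * x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    if sxx == 0.0:                      # degenerate resample: flat fit
        return my, 0.0
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
    return my - b * mx, b

def bagged_predict(xs, ys, x_new, n_models=25, seed=0):
    """Bootstrap aggregation: average the predictions of models fit to
    resampled data, the core mechanism behind ensemble regressors."""
    rng = random.Random(seed)
    n, preds = len(xs), []
    for _ in range(n_models):
        idx = [rng.randrange(n) for _ in range(n)]
        a, b = fit_line([xs[i] for i in idx], [ys[i] for i in idx])
        preds.append(a + b * x_new)
    return sum(preds) / n_models
```

On real melter data the base learner would be a tree or boosted ensemble over the full composition vector, not a one-feature line.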

Keywords: machine learning, predictive modelling, pattern recognition, radioactive waste glass

Procedia PDF Downloads 114
16357 The Strategy of Teaching Digital Art in Classroom as a Way of Enhancing Pupils’ Artistic Creativity

Authors: Aber Salem Aboalgasm, Rupert Ward

Abstract:

Teaching art by digital means is a big challenge for the majority of teachers of art and artistic design courses in primary education schools. Such courses can clearly identify the relationships between art, technology, and creativity in the classroom. The aim of this article is to present a modern way of teaching art, using digital tools in the art classroom, in order to improve creative ability in pupils aged between 9 and 11 years; it also presents a conceptual model for creativity based on digital art. The model could be useful for pupils interested in learning to draw with an e-drawing package, and for teachers interested in teaching their students modern digital art and improving children's creativity. The model is designed to show a strategy for teaching art through technology so that children learn how to be creative; it will also help education providers make suitable choices about which technological approaches to adopt, and it identifies the digital art tools that can help children develop their technical skills. Use of the model is also expected to help develop social interactive qualities that may improve intellectual ability.

Keywords: digital tools, motivation, creative activity, technical skill

Procedia PDF Downloads 457
16356 Statistical Inferences for GQARCH-Itô-Jumps Model Based on the Realized Range Volatility

Authors: Fu Jinyu, Lin Jinguan

Abstract:

This paper introduces a novel approach that unifies two types of models: the continuous-time jump-diffusion used to model high-frequency data, and the discrete-time GQARCH employed to model low-frequency financial data, by embedding the discrete GQARCH structure with jumps in the instantaneous volatility process. This model is named the "GQARCH-Itô-Jumps model." We adopt realized range-based threshold estimation for high-frequency financial data rather than realized return-based volatility estimators, which entail the loss of intra-day information on the price movement. Meanwhile, a quasi-likelihood function for the low-frequency GQARCH structure with jumps is developed for parametric estimation. The asymptotic theories are established for the proposed estimators mainly in the case of finite-activity jumps. Moreover, simulation studies are implemented to check the finite-sample performance of the proposed methodology, and we demonstrate how the proposed approach can be applied in practice to financial data.
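The realized range input can be illustrated with a Parkinson-style estimator built from intraday highs and lows; this sketch deliberately omits the threshold truncation the paper applies to separate jumps, and the prices are hypothetical:

```python
import math

def realized_range_variance(highs, lows):
    """Parkinson-type realized range variance over intraday intervals:
    RRV = (1 / (4 ln 2)) * sum_i ln(H_i / L_i) ** 2.
    Uses the full high-low range of each interval instead of only
    close-to-close returns."""
    c = 1.0 / (4.0 * math.log(2.0))
    return c * sum(math.log(h / l) ** 2 for h, l in zip(highs, lows))
```

The range-based statistic uses the intra-interval extremes, which is exactly the intra-day information the abstract notes is lost by return-based estimators.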

Keywords: Itô process, GQARCH, leverage effects, threshold, realized range-based volatility estimator, quasi-maximum likelihood estimate

Procedia PDF Downloads 151
16355 Evaluation of Model-Based Code Generation for Embedded Systems–Mature Approach for Development in Evolution

Authors: Nikolay P. Brayanov, Anna V. Stoynova

Abstract:

The model-based development approach is gaining support and acceptance. Its higher abstraction level simplifies system description, allowing domain experts to do their best work without particular knowledge of programming. The different levels of simulation support rapid prototyping, verifying and validating the product even before it exists physically. Nowadays the model-based approach is beneficial for modelling complex embedded systems as well as for generating code for many different hardware platforms. Moreover, it can be applied in safety-relevant industries like automotive, bringing extra automation to the expensive device certification process and especially to software qualification. Some companies using it report cost savings and quality improvements, but others claim no major change or even cost increases. This publication assesses the level of maturity and autonomy of the model-based approach for code generation. It is based on a real-life automotive seat heater (ASH) module, developed using The MathWorks, Inc. tools. The model, created with Simulink, Stateflow and MATLAB, is used for automatic generation of C code with Embedded Coder. To prove the maturity of the process, the Code Generation Advisor is used for automatic configuration, and all additional configuration parameters are set to auto where applicable, leaving the generation process to function autonomously. As a result of the investigation, the publication compares the quality of the automatically generated embedded code with that of manually developed code. The measurements show that, in general, the code generated by the automatic approach is no worse than the manual one. A deeper analysis of the technical parameters enumerates the disadvantages, some of which are identified as topics for our future work.

Keywords: embedded code generation, embedded C code quality, embedded systems, model-based development

Procedia PDF Downloads 238
16354 Predictive Semi-Empirical NOx Model for Diesel Engine

Authors: Saurabh Sharma, Yong Sun, Bruce Vernham

Abstract:

Accurate prediction of NOx emission is a continuous challenge in the field of diesel engine-out emission modeling. Performing experiments for every condition and scenario costs a significant amount of money and man-hours, so a model-based development strategy has been implemented to address this issue. NOx formation is highly dependent on the burned gas temperature and the O2 concentration inside the cylinder. Current empirical models are developed by calibrating parameters representing the engine operating conditions against the measured NOx, which limits the predictions of purely empirical models to the region where they were calibrated. An alternative solution is presented in this paper, which focuses on the utilization of in-cylinder combustion parameters to form a predictive semi-empirical NOx model. The result of this work is a fast and predictive NOx model built from physical parameters and empirical correlation. The model is developed using steady-state data collected over the entire operating region of the engine and a predictive combustion model developed in Gamma Technologies (GT)-Power using the Direct Injection (DI)-Pulse combustion object. In this approach, the temperature in both the burned and unburned zones is considered during the combustion period, i.e., from Intake Valve Closing (IVC) to Exhaust Valve Opening (EVO); the oxygen concentration consumed in the burned zone and the trapped fuel mass are considered as well. Several statistical methods are used to construct the model, including individual and ensemble machine learning methods. A detailed validation of the model on multiple diesel engines is reported: substantial numbers of cases are tested for different engine configurations over a large span of speed and load points, and different sweeps of operating conditions such as Exhaust Gas Recirculation (EGR), injection timing and Variable Valve Timing (VVT) are also considered. The model shows very good predictability and robustness at both sea-level and altitude conditions with different ambient conditions. Its advantages, high accuracy and robustness across operating conditions, low computational time, and the small number of data points required for calibration, establish a platform on which the model-based approach can be used for engine calibration and development. Moreover, this work aims to establish a framework for future model development for other targets such as soot, Combustion Noise Level (CNL), and the NO2/NOx ratio.
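The stated dependence of NOx formation on burned-zone temperature and oxygen concentration is often captured with a Zeldovich-inspired correlation. A hedged sketch of that kind of physical term (the constants and functional form here are illustrative, not the paper's calibrated model):

```python
import math

def nox_index(t_burn_k, o2_frac, k=1.0, e_act=38000.0):
    """Zeldovich-inspired NOx formation index:
    NOx ~ k * sqrt([O2]) * exp(-E / T_burn).
    t_burn_k: burned-zone temperature (K); o2_frac: O2 mole fraction.
    k and e_act are hypothetical placeholders for calibrated values."""
    return k * math.sqrt(o2_frac) * math.exp(-e_act / t_burn_k)
```

A semi-empirical model would feed per-cycle burned-zone temperatures and consumed O2 (from IVC to EVO) into terms like this, then calibrate the constants statistically.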

Keywords: diesel engine, machine learning, NOₓ emission, semi-empirical

Procedia PDF Downloads 110
16353 Unsteady Rayleigh-Bénard Convection of Nanoliquids in Enclosures

Authors: P. G. Siddheshwar, B. N. Veena

Abstract:

Rayleigh-Bénard convection of a nanoliquid in shallow, square and tall enclosures is studied using the Khanafer-Vafai-Lightstone single-phase model. The thermophysical properties of water, copper, copper oxide, alumina, silver and titania at 300 K under stagnant conditions, collected from the literature, are used to calculate the thermophysical properties of the water-based nanoliquids; phenomenological laws and mixture theory are used for these calculations. Free-free, rigid-rigid and rigid-free boundary conditions are considered in the study. The intractable Lorenz model for each boundary combination is derived and then reduced to the tractable Ginzburg-Landau model. The amplitude thus obtained is used to quantify the heat transport in terms of the Nusselt number. The addition of nanoparticles is shown not to alter the influence of the nature of the boundaries on the onset of convection or on heat transport. Among the three enclosures considered, tall and shallow enclosures are found to transport maximum and minimum energy, respectively. The enhancement of heat transport due to nanoparticles in the three enclosures is found to be in the range of 3%-11%. A comparison of the results in the case of rigid-rigid boundaries with those of an earlier work shows good agreement. The study has the limitation that the thermophysical properties are calculated using quantities modelled for static conditions.
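The mixture-theory and phenomenological laws mentioned for the nanoliquid properties typically include the volume-weighted density and the Maxwell-Garnett effective thermal conductivity. A sketch with copper-water style values (the numbers are illustrative, not the paper's tabulated data):

```python
def nanoliquid_density(phi, rho_f, rho_p):
    """Mixture-theory density: rho_nl = (1 - phi) * rho_f + phi * rho_p,
    with phi the nanoparticle volume fraction."""
    return (1.0 - phi) * rho_f + phi * rho_p

def nanoliquid_conductivity(phi, k_f, k_p):
    """Maxwell-Garnett effective thermal conductivity of a dilute
    suspension of spherical nanoparticles in a base fluid."""
    num = k_p + 2.0 * k_f - 2.0 * phi * (k_f - k_p)
    den = k_p + 2.0 * k_f + phi * (k_f - k_p)
    return k_f * num / den
```

Analogous volume-fraction-weighted rules are used for heat capacity, and a Brinkman-type law for viscosity.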

Keywords: enclosures, free-free, rigid-rigid, rigid-free boundaries, Ginzburg-Landau model, Lorenz model

Procedia PDF Downloads 253
16352 Evaluation of Turbulence Prediction over Washington, D.C.: Comparison of DCNet Observations and North American Mesoscale Model Outputs

Authors: Nebila Lichiheb, LaToya Myles, William Pendergrass, Bruce Hicks, Dawson Cagle

Abstract:

Atmospheric transport of hazardous materials in urban areas is increasingly under investigation due to the potential impact on human health and the environment. In response to health and safety concerns, several dispersion models have been developed to analyze and predict the dispersion of hazardous contaminants. The models of interest usually rely on meteorological information obtained from the meteorological models of NOAA’s National Weather Service (NWS). However, due to the complexity of the urban environment, NWS forecasts provide an inadequate basis for dispersion computation in urban areas. A dense meteorological network in Washington, DC, called DCNet, has been operated by NOAA since 2003 to support the development of urban monitoring methodologies and provide the driving meteorological observations for atmospheric transport and dispersion models. This study focuses on the comparison of wind observations from the DCNet station on the U.S. Department of Commerce Herbert C. Hoover Building against the North American Mesoscale (NAM) model outputs for the period 2017-2019. The goal is to develop a simple methodology for modifying NAM outputs so that the dispersion requirements of the city and its urban area can be satisfied. This methodology will allow us to quantify the prediction errors of the NAM model and propose adjustments of key variables controlling dispersion model calculation.
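Quantifying the NAM prediction errors against the DCNet tower would start from simple paired statistics such as mean bias and RMSE; a minimal sketch with made-up wind speeds:

```python
import math

def bias_rmse(observed, predicted):
    """Mean bias and root-mean-square error of model values against
    paired tower observations (e.g. NAM wind speed vs. DCNet)."""
    errs = [p - o for o, p in zip(observed, predicted)]
    bias = sum(errs) / len(errs)
    rmse = math.sqrt(sum(e * e for e in errs) / len(errs))
    return bias, rmse
```

A systematic bias of this kind is what a simple correction methodology would subtract from the NAM output before it feeds a dispersion model.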

Keywords: meteorological data, Washington D.C., DCNet data, NAM model

Procedia PDF Downloads 229
16351 Simulation Model of Induction Heating in COMSOL Multiphysics

Authors: K. Djellabi, M. E. H. Latreche

Abstract:

The induction heating phenomenon depends on various factors, making the problem highly nonlinear. The mathematical analysis of this problem is in most cases very difficult and is reduced to simple cases. Further knowledge of induction heating systems is generated in production environments, but such trial-and-error procedures are long and expensive. Numerical models of the induction heating problem are an approach that reduces the above-mentioned drawbacks. This paper deals with a simulation model of the induction heating problem: a simulation model of an induction heating system is created in COMSOL Multiphysics. We present results of numerical simulations of the induction heating process in pieces of cylindrical shape, in an inductor with four coils. The modeling of the induction heating process was done with COMSOL Multiphysics Version 4.2a, and for the study we present the temperature charts.
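One quantity behind the strong nonlinearity of induction heating is the electromagnetic skin depth, which sets how deeply the induced currents, and hence the heat, penetrate the cylindrical workpiece:

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability, H/m

def skin_depth(resistivity, freq_hz, mu_r=1.0):
    """Electromagnetic skin depth (m):
    delta = sqrt(rho / (pi * f * mu0 * mu_r)).
    resistivity in ohm-m; mu_r is the relative permeability."""
    return math.sqrt(resistivity / (math.pi * freq_hz * MU0 * mu_r))
```

For copper at 50 Hz the depth is around 9 mm; raising the inductor frequency concentrates the heating in a thinner surface layer, which is one reason the coupled problem must be solved numerically.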

Keywords: induction heating, electromagnetic field, inductor, numerical simulation, finite element

Procedia PDF Downloads 308
16350 The Impact of Corporate Social Responsibility and Relationship Marketing on Relationship Maintainer and Customer Loyalty by Mediating Role of Customer Satisfaction

Authors: Anam Bhatti, Sumbal Arif, Mariam Mehar, Sohail Younas

Abstract:

CSR has become one of the most important instruments for satisfying customers. The objective of this research is to measure CSR, relationship marketing, and customer satisfaction. In Pakistan, there is not enough research on the effect of CSR and relationship marketing on relationship maintenance and customer loyalty. A deductive approach and a survey method are used as the research approach and research strategy, respectively. The research design is a descriptive, quantitative study. For data collection, a questionnaire with semantic differential and seven-point scales is adopted. Data were collected using non-probability convenience sampling, with a sample size of 400. Confirmatory factor analysis, structural equation modeling, mediation analysis, and regression analysis were performed in Amos. Strong empirical evidence supports that customers' perception of CSR performance is highly influenced by their values.

Keywords: CSR, relationship marketing, relationship maintainer, customer loyalty, customer satisfaction

Procedia PDF Downloads 474
16349 Comparison of Johnson-Cook and Barlat Material Model for 316L Stainless Steel

Authors: Yiğit Gürler, İbrahim Şimşek, Müge Savaştaer, Ayberk Karakuş, Alper Taşdemirci

Abstract:

316L steel is frequently used in industry due to its easy formability and accessibility in sheet metal forming processes. Numerical and experimental studies examining the mechanical behavior of 316L stainless steel during forming are frequently encountered in the literature. 316L stainless steel is the most common material used in the production of plate heat exchangers, which are produced by plastic deformation of the stainless steel sheet. The motivation of this study is to determine the appropriate material model for simulating the sheet metal forming process. For this reason, two different material models were examined and LS-DYNA material cards were created using material test data: MAT133_BARLAT_YLD2000 and MAT093_SIMPLIFIED_JOHNSON_COOK. Results of the tensile test and the hydraulic bulge test, performed both numerically and experimentally, were compared. The obtained results were evaluated comparatively and the most suitable material model was selected for the forming simulation. In future studies, this material model will be used in numerical modeling of the sheet metal forming process.
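The MAT093 card implements a simplified Johnson-Cook flow stress: strain hardening and strain-rate sensitivity without the thermal-softening and damage terms of the full model. A sketch with illustrative 316L-type constants (hypothetical literature-style values, not the calibrated card from this study):

```python
import math

def sjc_flow_stress(eps_p, rate_star, a=305e6, b=1161e6, n=0.61, c=0.01):
    """Simplified Johnson-Cook flow stress (Pa):
    sigma = (A + B * eps_p ** n) * (1 + C * ln(rate_star)).
    eps_p: effective plastic strain; rate_star: strain rate normalized
    by the reference rate. A, B, n, c are illustrative constants."""
    return (a + b * eps_p ** n) * (1.0 + c * math.log(rate_star))
```

The Barlat YLD2000 card, by contrast, is an anisotropic yield surface and needs directional test data (e.g. 0/45/90-degree tensile plus bulge) rather than a single hardening curve.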

Keywords: 316L, mechanical characterization, metal forming, LS-DYNA

Procedia PDF Downloads 324
16348 Comparative Analysis of Dissimilarity Detection between Binary Images Based on Equivalency and Non-Equivalency of Image Inversion

Authors: Adnan A. Y. Mustafa

Abstract:

Image matching is a fundamental problem that arises frequently in many aspects of robot and computer vision. It can become a time-consuming process when matching images to a database consisting of hundreds of images, especially if the images are big. One approach to reducing the time complexity of the matching process is to reduce the search space in a pre-matching stage, by simply removing dissimilar images quickly. The Probabilistic Matching Model for Binary Images (PMMBI) showed that dissimilarity detection between binary images can be accomplished quickly by random pixel mapping and is size invariant. The model is based on the gamma binary similarity distance that recognizes an image and its inverse as containing the same scene and hence considers them to be the same image. However, in many applications, an image and its inverse are not treated as being the same but rather dissimilar. In this paper, we present a comparative analysis of dissimilarity detection between PMMBI based on the gamma binary similarity distance and a modified PMMBI model based on a similarity distance that does distinguish between an image and its inverse as being dissimilar.
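The random-pixel-mapping idea can be sketched directly: sample pixel positions, estimate the mismatch fraction, and flag likely-dissimilar images early. This is only the spirit of PMMBI; the paper's gamma binary similarity distance and its probabilistic bounds are not reproduced, and the threshold here is arbitrary:

```python
import random

def likely_dissimilar(img_a, img_b, n_samples=200, threshold=0.25,
                      inverse_equivalent=True, seed=1):
    """Estimate mismatch between two equal-size binary images by random
    pixel sampling. With inverse_equivalent=True an image and its
    inverse count as the same scene: the distance used is min(d, 1 - d),
    mirroring the original model; with False, inversion is dissimilar."""
    rng = random.Random(seed)
    h, w = len(img_a), len(img_a[0])
    mismatches = 0
    for _ in range(n_samples):
        i, j = rng.randrange(h), rng.randrange(w)
        mismatches += img_a[i][j] != img_b[i][j]
    d = mismatches / n_samples
    if inverse_equivalent:
        d = min(d, 1.0 - d)
    return d > threshold
```

The `inverse_equivalent` flag is exactly the modeling choice the paper compares: whether an image and its inverse are treated as the same scene or as dissimilar.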

Keywords: binary image, dissimilarity detection, probabilistic matching model for binary images, image mapping

Procedia PDF Downloads 149
16347 Probabilistic Graphical Model for the Web

Authors: M. Nekri, A. Khelladi

Abstract:

The World Wide Web is a network with a complex topology, the main properties of which are a power-law degree distribution, a low clustering coefficient, and a small average distance. Modeling the web as a graph allows information to be located quickly and consequently helps in the construction of search engines. Here, we present a model based on existing probabilistic graphs that exhibits all the aforesaid characteristics. This work consists in studying the web in order to understand its structure; this will enable us to model it more easily and to propose a possible algorithm for its exploration.
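A standard way to realize the power-law degree distribution through the preferential attachment named in the keywords is the Barabási-Albert growth process; a compact sketch (a generic model, not the specific probabilistic graph proposed in the paper):

```python
import random

def ba_edges(n_nodes, m=2, seed=0):
    """Grow a Barabasi-Albert-style graph: each new node links to m
    distinct existing nodes chosen proportionally to current degree,
    implemented by sampling from a degree-weighted node list."""
    rng = random.Random(seed)
    weighted = list(range(m))          # seed nodes, weight 1 each
    edges = []
    for new in range(m, n_nodes):
        targets = set()
        while len(targets) < m:        # degree-proportional sampling
            targets.add(rng.choice(weighted))
        for t in targets:
            edges.append((new, t))
            weighted += [new, t]       # each endpoint gains one degree
    return edges
```

Because well-connected pages keep attracting new links, early nodes accumulate high degree, producing the heavy-tailed distribution observed on the web.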

Keywords: clustering coefficient, preferential attachment, small world, web community

Procedia PDF Downloads 269
16346 Application of Data Mining Techniques for Tourism Knowledge Discovery

Authors: Teklu Urgessa, Wookjae Maeng, Joong Seek Lee

Abstract:

Five implementations of three data mining classification techniques were applied to extract important insights from tourism data. The aim was to find the best-performing algorithm, among those compared, for tourism knowledge discovery. The knowledge discovery from data process was used as the process model, and 10-fold cross-validation was used for testing. Various data preprocessing activities were performed to obtain the final dataset for model building. Classification models of the selected algorithms were built under different scenarios on the preprocessed dataset. The best-performing algorithm on the tourism dataset was Random Forest (76%) before information-gain-based attribute selection, and J48 (C4.5) (75%) after selection of the attributes most relevant to the class (target) attribute. In terms of model-building time, attribute selection improved the efficiency of all algorithms, with the artificial neural network (multilayer perceptron) showing the highest improvement (90%). The rules extracted from the decision tree model are presented; they reveal intricate, non-trivial knowledge that would otherwise not be discovered by simple statistical analysis, achieved with moderate accuracy by the classification algorithms.
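The information-gain attribute selection step can be sketched directly: rank each attribute by how much it reduces class entropy. A toy example with hypothetical tourism-style records:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (bits) of a class label list."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def info_gain(rows, attr, labels):
    """Entropy reduction from splitting the data on one attribute, the
    criterion used to keep only the attributes most relevant to the class."""
    groups = {}
    for row, y in zip(rows, labels):
        groups.setdefault(row[attr], []).append(y)
    remainder = sum(len(g) / len(labels) * entropy(g)
                    for g in groups.values())
    return entropy(labels) - remainder
```

Attributes are then sorted by gain and only the top-ranked ones are kept before rebuilding the classifiers, which is what shortened the model-building times reported above.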

Keywords: classification algorithms, data mining, knowledge discovery, tourism

Procedia PDF Downloads 291
16345 A Literature Review of the Trend towards Indoor Dynamic Thermal Comfort

Authors: James Katungyi

Abstract:

The steady-state thermal comfort model, which dominates thermal comfort practice and posits ideal thermal conditions within a narrow range, does not deliver the expected comfort levels among occupants. Furthermore, the buildings where this model is applied consume a lot of energy in conditioning. This paper reviews significant literature on thermal comfort in dynamic indoor conditions, including the adaptive thermal comfort model and alliesthesia. A major finding is that the adaptive thermal comfort model is part of a trend from static to dynamic indoor environments in aspects such as lighting, views, sounds and ventilation. Alliesthesia, or thermal delight, is consistent with this trend towards dynamic thermal conditions. It is within this trend that the twofold goal of increased thermal comfort and reduced energy consumption lies. At the heart of this trend is a rediscovery of the link between the natural environment and human well-being, a link that was partially severed by over-reliance on mechanically dominated artificial indoor environments. The paper concludes by advocating thermal conditioning solutions that integrate mechanical with natural thermal conditioning in a balanced manner, in order to meet occupant thermal needs without endangering the environment.
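The adaptive model's central idea, that comfort drifts with the outdoor climate rather than sitting at a fixed set point, is operationalized in standards. A sketch using the commonly quoted ASHRAE 55 adaptive relation (shown for illustration; the standard also bounds its validity range of outdoor temperatures):

```python
def adaptive_comfort_temp(t_out_mean):
    """Adaptive comfort temperature (deg C) as a function of the
    prevailing mean outdoor air temperature, after the ASHRAE 55
    adaptive model: T_comf = 0.31 * T_out + 17.8."""
    return 0.31 * t_out_mean + 17.8

def acceptable_80pct(t_indoor, t_out_mean):
    """ASHRAE 55 80% acceptability band: comfort temperature +/- 3.5 C."""
    return abs(t_indoor - adaptive_comfort_temp(t_out_mean)) <= 3.5
```

Because the acceptable band moves with the seasons, a naturally ventilated building can satisfy it over a much wider indoor range than the static model allows, which is the energy argument made above.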

Keywords: adaptive thermal comfort, alliesthesia, energy, natural environment

Procedia PDF Downloads 217
16344 Reducing the Risk of Alcohol Relapse after Liver-Transplantation

Authors: Rebeca V. Tholen, Elaine Bundy

Abstract:

Background: Liver transplantation (LT) is considered the only curative treatment for end-stage liver disease (ESLD). The effects of alcoholism can cause irreversible liver damage, cirrhosis, and subsequent liver failure. Alcohol relapse after transplant occurs in 20-50% of patients and increases the risk for recurrent cirrhosis, organ rejection, and graft failure. Alcohol relapse after transplant has been identified as a problem among liver transplant recipients at a large urban academic transplant center in the United States. Transplantation will reverse the complications of ESLD, but it does not treat underlying alcoholism or reduce the risk of relapse after transplant. The purpose of this quality improvement project is to implement and evaluate the effectiveness of the High-Risk Alcoholism Relapse (HRAR) Scale to screen and identify patients at high risk for alcohol relapse after receiving an LT. Methods: The HRAR Scale is a predictive tool designed to determine the severity of alcoholism and the risk of relapse after transplant. The scale consists of the three variables identified as having the highest predictive power for early relapse: daily number of drinks, history of previous inpatient treatment for alcoholism, and number of years of heavy drinking. All adult liver transplant recipients at a large urban transplant center were screened with the HRAR Scale prior to hospital discharge. Each variable is ranked on a zero-to-two ordinal score, and the total score ranges from zero to six; scores of three to six are considered high risk. Results: Descriptive statistics revealed that 25 patients were newly transplanted and discharged from the hospital during an 8-week period. 40% of patients (n=10) were identified as high-risk for relapse and 60% (n=15) as low-risk. The daily number of drinks was determined by alcohol content (1 drink = 15 g of ethanol) and the number of drinks per day. 60% of patients reported drinking 9-17 drinks per day, and 40% reported ≤ 9 drinks. 50% of high-risk patients reported drinking for ≥ 25 years, 40% for 11-25 years, and 10% for ≤ 11 years. For number of inpatient treatments for alcoholism, 50% received inpatient treatment once, 20% more than once, and 30% reported never receiving inpatient treatment. The findings reveal the importance and value of a validated screening tool as a more efficient method than unstructured screening alone. Integrating a structured clinical tool helps guide the drinking-history portion of the psychosocial assessment, and targeted interventions can be implemented for all high-risk patients. Conclusions: Our findings support the effectiveness of utilizing the HRAR Scale to screen and identify patients at high risk for alcohol relapse post-LT. Recommendations to help maintain post-transplant sobriety include starting a transplant support group within the organization for all high-risk patients.
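The three-variable scoring logic described in the Methods (each variable ranked zero to two, total zero to six, three or more flagged high-risk) can be sketched as follows. The exact cut-points below are assumptions loosely inferred from the percentile bands reported in the Results, not the published HRAR instrument, so treat them as placeholders:

```python
# Hypothetical sketch of HRAR-style scoring; the cut-points are
# illustrative assumptions, NOT the validated instrument's values.
def hrar_score(daily_drinks, years_heavy_drinking, inpatient_treatments):
    """Return (total score 0-6, risk label). Each variable scored 0-2."""
    def rank(value, low_cut, high_cut):
        # 0 below the first cut, 1 between cuts, 2 at or above the second
        if value < low_cut:
            return 0
        if value < high_cut:
            return 1
        return 2

    total = (rank(daily_drinks, 9, 17)
             + rank(years_heavy_drinking, 11, 25)
             + rank(inpatient_treatments, 1, 2))
    return total, ("high-risk" if total >= 3 else "low-risk")
```

For example, a patient reporting 20 drinks per day, 30 years of heavy drinking, and two prior inpatient treatments would score the maximum of six under these assumed cut-points.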

Keywords: alcoholism, liver transplant, quality improvement, substance abuse

Procedia PDF Downloads 111
16343 A Review of HVDC Modular Multilevel Converters Subjected to DC and AC Faults

Authors: Jude Inwumoh, Adam P. R. Taylor, Kosala Gunawardane

Abstract:

Modular multilevel converters (MMCs) exhibit a highly scalable and modular characteristic with good voltage/power expansion, fault-tolerance capability, low output harmonic content, good redundancy, and a flexible front-end configuration. Fault detection, location, and isolation, as well as maintaining fault ride-through (FRT), are major challenges to MMC reliability and power supply sustainability. Several papers have been reviewed to identify the MMC configuration with the best fault capability. DC faults are the most common fault type; the probability of an AC fault occurring in an MMC is low, though the consequences of AC faults are severe. This paper reviews several MMC topologies and modulation techniques for tackling faults. These fault control strategies are compared based on cost, complexity, controllability, and power loss. A meshed network of half-bridge (HB) MMC topology was optimal in providing fault ride-through compared with other MMC topologies, but only when combined with DC circuit breakers (CBs), AC CBs, and fault current limiters (FCLs).

Keywords: MMC-HVDC, DC faults, fault current limiters, control scheme

Procedia PDF Downloads 132
16342 Stress and Strain Analysis of Notched Bodies Subject to Non-Proportional Loadings

Authors: Ayhan Ince

Abstract:

In this paper, a simplified analytical method for calculating the elasto-plastic stresses and strains of notched bodies subject to non-proportional loading paths is discussed. The method is based on the Neuber notch correction, which relates the incremental elastic and elastic-plastic strain energy densities at the notch root, together with the material constitutive relationship. The validity of the method was demonstrated by comparing the computed results of the proposed model against finite element numerical data for a notched shaft. The comparison showed that the model estimated notch-root elasto-plastic stresses and strains with good accuracy using linear-elastic stresses. The proposed model provides a more efficient and simpler analysis method, preferable to expensive experimental component tests and to more complex and time-consuming incremental non-linear FE analysis. The model is particularly suitable for fatigue life and fatigue damage estimates of notched components subjected to non-proportional loading paths.
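For the monotonic (proportional) special case, the Neuber idea can be sketched numerically: the elastic pseudo-stress fixes the product of notch-root stress and strain, (Kt·S)²/E = σ·ε(σ), and a one-dimensional root solve with a Ramberg-Osgood strain curve recovers the elasto-plastic stress. The material constants below are illustrative assumptions, not values from the paper, and the incremental energy-density form used for non-proportional paths is more involved than this sketch:

```python
# Minimal monotonic Neuber-type notch correction sketch.
# Material constants (E, K', n') below are illustrative, not the paper's.
def neuber_notch_stress(S, Kt, E, K_prime, n_prime):
    """Solve sigma * eps(sigma) = (Kt*S)**2 / E for the notch-root stress
    by bisection, with a Ramberg-Osgood strain curve."""
    target = (Kt * S) ** 2 / E

    def eps(sigma):
        # total strain = elastic part + plastic (Ramberg-Osgood) part
        return sigma / E + (sigma / K_prime) ** (1.0 / n_prime)

    lo, hi = 0.0, Kt * S  # the elastic result Kt*S bounds the answer
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mid * eps(mid) < target:  # product increases with sigma
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

With, say, S = 100 MPa, Kt = 3, E = 200 GPa, K' = 1000 MPa, n' = 0.2, the solver returns a notch-root stress well below the elastic estimate Kt·S = 300 MPa, which is the expected plasticity-driven relaxation.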

Keywords: elasto-plastic, stress-strain, notch analysis, non-proportional loadings, cyclic plasticity, fatigue

Procedia PDF Downloads 459
16341 Transfer Function Model-Based Predictive Control for Nuclear Core Power Control in PUSPATI TRIGA Reactor

Authors: Mohd Sabri Minhat, Nurul Adilla Mohd Subha

Abstract:

The 1 MWth PUSPATI TRIGA Reactor (RTP) at the Malaysia Nuclear Agency has been operating for more than 35 years. The existing core power control uses a conventional controller known as the Feedback Control Algorithm (FCA). It is technically challenging to keep the core power output stable and operating within acceptable error bands to meet the safety demands of the RTP. The current power-tracking performance could be considered unsatisfactory, and there is still significant room for improvement. Hence, a new core power control design is very important to improve the current tracking and regulating performance by controlling the movement of the control rods in a way that suits the high sensitivity of nuclear reactor power control. In this paper, the proposed Model Predictive Control (MPC) law was applied to control the core power. The model for core power control was based on mathematical models of the reactor core, MPC, and a control rod selection algorithm. The mathematical models of the reactor core were based on the point kinetics model, thermal-hydraulic models, and reactivity models. The proposed MPC was presented as a transfer function model of the reactor core according to perturbation theory. The transfer function model-based predictive control (TFMPC) was developed to design the core power control with predictions based on a T-filter, towards the real-time implementation of MPC on hardware. This paper introduces sensitivity functions for the TFMPC feedback loop to reduce the impact on the input actuation signal and demonstrates the behaviour of TFMPC in terms of disturbance and noise rejection. Both the tracking and regulating performance of the conventional controller and of TFMPC were compared and analysed using MATLAB. In conclusion, the proposed TFMPC shows satisfactory tracking and regulating performance for controlling nuclear reactor core power with high reliability and safety.
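The receding-horizon idea behind MPC can be illustrated on a toy plant: predict the output over a horizon for each candidate control move, pick the move minimizing tracking error plus control effort, apply only the first move, and repeat. The first-order discrete model, gains, horizon, and weights below are hypothetical placeholders, not the RTP core model or the TFMPC design:

```python
# Toy receding-horizon (MPC-style) sketch on a hypothetical first-order
# plant x[k+1] = a*x[k] + b*u[k]; all numbers are illustrative.
def mpc_step(x, r, a=0.9, b=0.1, horizon=10, lam=0.01, grid=None):
    """Pick the constant control move over the horizon that minimizes
    tracking error plus weighted control effort; return the first move."""
    if grid is None:
        grid = [i / 10.0 for i in range(-20, 21)]  # candidate moves
    best_u, best_cost = 0.0, float("inf")
    for u in grid:
        xp, cost = x, 0.0
        for _ in range(horizon):
            xp = a * xp + b * u               # predicted trajectory
            cost += (xp - r) ** 2 + lam * u ** 2
        if cost < best_cost:
            best_u, best_cost = u, cost
    return best_u

# Closed loop: the output climbs toward the setpoint r = 0.2
x, r = 0.0, 0.2
for _ in range(50):
    x = 0.9 * x + 0.1 * mpc_step(x, r)
```

Real MPC solves the horizon problem with a quadratic program over a free input sequence rather than a brute-force grid, but the receding-horizon structure is the same.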

Keywords: core power control, model predictive control, PUSPATI TRIGA reactor, TFMPC

Procedia PDF Downloads 237
16340 Exploring a Teaching Model in Cultural Education Using Video-Focused Social Networking Apps: An Example of Chinese Language Teaching for African Students

Authors: Zhao Hong

Abstract:

When international students study Chinese as a foreign or second language, it is important for them to form constructive viewpoints and an open mindset toward Chinese culture, which helps them make faster progress in their language acquisition. Observations of African students at the Liaoning Institute of Science and Technology show that by integrating video-focused social networking apps such as TikTok ("Douyin") on a controlled basis, students raise their interest not only in learning the Chinese language but also in understanding Chinese culture. During the last twelve months, our research group explored a teaching model using selected contents in certain classroom settings, including virtual classrooms during lockdown periods due to the COVID-19 pandemic. A survey based on interviews was conducted among international students from African countries enrolled in Chinese language courses at the Liaoning Institute of Science and Technology. Based on the results, a teaching model was built for Chinese language acquisition by entering the "mobile Chinese culture".

Keywords: Chinese as a foreign language, cultural education, social networking apps, teaching model

Procedia PDF Downloads 70
16339 The Reliability Analysis of Concrete Chimneys Due to Random Vortex Shedding

Authors: Saba Rahman, Arvind K. Jain, S. D. Bharti, T. K. Datta

Abstract:

Chimneys are generally tall and slender structures with circular cross-sections, which makes them highly prone to wind forces. Wind exerts pressure on the wall of the chimney, producing unwanted forces. Vortex-induced oscillation is one such excitation, and it can lead to the failure of chimneys. Therefore, vortex-induced oscillation of chimneys is of great concern to researchers and practitioners, since many failures of chimneys due to vortex shedding have occurred in the past. As a consequence, extensive research has taken place on the subject over decades. Many laboratory experiments have been performed to verify the theoretical models proposed to predict vortex-induced forces, including aero-elastic effects. Comparatively few prototype measurement data have been recorded to verify the proposed theoretical models. For this reason, the theoretical models developed with the help of experimental laboratory data are utilized for analyzing chimneys for vortex-induced forces. This calls for a reliability analysis of the predicted responses of chimneys due to the vortex shedding phenomenon. Although a considerable literature exists on the vortex-induced oscillation of chimneys, including code provisions, reliability analysis of chimneys against failure caused by vortex shedding is scanty. In the present study, a reliability analysis of chimneys against vortex shedding failure is presented, assuming the uncertainty in the vortex shedding phenomenon to be significantly greater than the other uncertainties, which are therefore ignored. The vortex shedding is modeled as a stationary random process and is represented by a power spectral density function (PSDF). It is assumed that the vortex shedding forces are perfectly correlated and act over the top one-third height of the chimney. The PSDF of the tip displacement of the chimney is obtained by performing a frequency-domain spectral analysis using a matrix approach. For this purpose, both the chimney and the random wind forces are discretized over a number of points along the height of the chimney. The method of analysis duly accounts for the aero-elastic effects. The double-barrier threshold crossing level, as proposed by Vanmarcke, is used for determining the probability of crossing different threshold levels of the tip displacement of the chimney. Assuming the annual distribution of the mean wind velocity to be a Gumbel Type-I distribution, the fragility curve denoting the variation of the annual probability of threshold crossing against different threshold levels of the tip displacement is determined, and the reliability estimate is derived from the fragility curve. A 210 m tall concrete chimney with a base diameter of 35 m, a top diameter of 21 m, and a wall thickness of 0.3 m is taken as an illustrative example. The terrain condition is assumed to correspond to a city center. The expression for the PSDF of the vortex shedding force is taken from Vickery and Basu. The results of the study show that the threshold crossing reliability of the tip displacement of the chimney is significantly influenced by the assumed structural damping and the Gumbel distribution parameters. Further, the aero-elastic effect influences the reliability estimate to a great extent for small structural damping.
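The Gumbel Type-I step in building such a fragility curve can be sketched as follows: each displacement threshold is mapped (via the response analysis) to a wind speed, and the Gumbel annual-maximum distribution converts that speed into an annual exceedance probability. The location/scale parameters and the threshold-to-wind mapping below are illustrative assumptions, not values from the study:

```python
# Sketch of the Gumbel Type-I step behind a fragility curve.
# mu (location) and beta (scale) are illustrative assumptions.
import math

def gumbel_exceedance(v, mu=25.0, beta=3.0):
    """Annual probability that the annual-maximum mean wind speed
    exceeds v (m/s), with annual maxima modelled as Gumbel Type-I."""
    return 1.0 - math.exp(-math.exp(-(v - mu) / beta))

# Crude fragility sketch: each tip-displacement threshold (m) is mapped
# to the wind speed that first produces it (hypothetical numbers), then
# to an annual probability of threshold crossing.
thresholds_to_wind = {0.1: 22.0, 0.2: 26.0, 0.3: 30.0}
fragility = {d: gumbel_exceedance(v) for d, v in thresholds_to_wind.items()}
```

As expected, the annual crossing probability decreases monotonically with the displacement threshold, which is the qualitative shape of a fragility curve.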

Keywords: chimney, fragility curve, reliability analysis, vortex-induced vibration

Procedia PDF Downloads 155
16338 Forensic Analysis of Thumbnail Images in Windows 10

Authors: George Kurian, Hongmei Chi

Abstract:

Digital evidence plays a critical role in most legal investigations, and in many cases thumbnail databases hold important information for the investigation. The probability of retrieving digital evidence from a computer or smart device has increased, even when the previous user removed data and deleted apps on those devices. With advances in digital forensics, the ability to recover residual information from various thumbnail applications has improved. This paper focuses on investigating thumbnail information in Windows 10. Thumbnail images of interest in forensic investigations may remain intact even when the original pictures have been deleted. Our research goal is to recover useful information from thumbnails. In this research project, we use various forensic tools to collect leftover thumbnail information from deleted videos or pictures. We examine and describe the various thumbnail sources in Windows and propose a methodology for thumbnail collection and analysis on laptops or desktops. A machine learning algorithm is adopted to help speed up the extraction of content from thumbnail pictures.
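As a small illustration of the collection step, a script can enumerate the Windows thumbnail cache database files before they are parsed with forensic tools. The Explorer cache location and `thumbcache_*.db` naming pattern below are typical for Windows 10 but should be verified on each system; this is a sketch, not a complete acquisition procedure:

```python
# Hypothetical helper to enumerate Windows 10 thumbnail cache files.
# Typical location (assumption, varies per system/user):
#   %LocalAppData%\Microsoft\Windows\Explorer
from pathlib import Path

def find_thumbcache_files(explorer_dir):
    """Return thumbcache_*.db files in explorer_dir, largest first,
    so the richest caches are triaged before the small ones."""
    root = Path(explorer_dir)
    return sorted(root.glob("thumbcache_*.db"),
                  key=lambda p: p.stat().st_size, reverse=True)
```

In practice the listed files would then be hashed for chain of custody and parsed with a thumbnail-cache viewer or carving tool.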

Keywords: digital forensic, forensic tools, soundness, thumbnail, machine learning, OCR

Procedia PDF Downloads 127
16337 Simulation of Growth and Yield of Rice Under Irrigation and Nitrogen Management Using ORYZA2000

Authors: Mojtaba Esmaeilzad Limoudehi

Abstract:

To evaluate the ORYZA2000 model under irrigation and nitrogen fertilization management, a split-plot experiment in a randomized complete block design with three replications on a hybrid (spring) cultivar was conducted at the Rice Research Institute in the 1387-1388 crop year. The irrigation regime served as the main plot at four levels (permanent flood irrigation and irrigation intervals of around 5, 8, and 11 days), and four levels of nitrogen fertilizer (0, 90, 120, and 150 kg N/ha) were the subplots. Simulated and measured values of leaf area index, grain yield, and biological yield were compared using the regression coefficient, the t-test, the root mean square error (RMSE), and the normalized root mean square error (RMSEn). The normalized root mean square error was 10% for grain yield, 9% for biological yield, and 23% for maximum LAI. The simulation results show that the accuracy of the ORYZA2000 model is good for grain yield and biological yield, but the model does not simulate maximum LAI well. The results support ORYZA2000 and indicate that the model can be used under these nitrogen fertilizer and irrigation management conditions.
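The evaluation statistics reported above can be sketched with their standard definitions; this is the usual formulation of RMSE and RMSEn (RMSE as a percentage of the measured mean), not code from the study:

```python
# Standard definitions of RMSE and normalized RMSE (RMSEn),
# as commonly used in crop-model evaluation.
import math

def rmse(simulated, measured):
    """Root mean square error between paired simulated/measured values."""
    n = len(measured)
    return math.sqrt(sum((s - m) ** 2 for s, m in zip(simulated, measured)) / n)

def rmse_normalized(simulated, measured):
    """RMSE expressed as a percentage of the measured mean (RMSEn)."""
    mean_measured = sum(measured) / len(measured)
    return 100.0 * rmse(simulated, measured) / mean_measured
```

By this convention, the reported RMSEn of 10% for grain yield means the RMSE was one-tenth of the mean measured grain yield.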

Keywords: evaluation, rice, nitrogen fertilizer, model ORYZA2000

Procedia PDF Downloads 67
16336 A Study on Improvement of the Torque Ripple and Demagnetization Characteristics of a PMSM

Authors: Yong Min You

Abstract:

Research on the torque ripple of permanent magnet synchronous motors (PMSMs) has progressed rapidly, as torque ripple affects the noise and vibration of electric vehicles. There are several ways to reduce torque ripple, such as increasing the number of slots and poles, notching the rotor and stator teeth, and skewing the rotor and stator. However, the conventional methods have disadvantages in terms of material cost and productivity. The demagnetization characteristics of PMSMs must also be ensured for electric vehicle applications. Due to the rare-earth supply issue, demand for Dy-free permanent magnets, which can be applied to PMSMs for electric vehicles, has been increasing. Because Dy-free permanent magnets have lower coercivity, the demagnetization characteristic has become more significant. To improve the torque ripple as well as the demagnetization characteristics, which are significant parameters for electric vehicle applications, an unequal air-gap model is proposed for a PMSM. A shape optimization is performed on the design variables of the unequal air-gap model: the shape of the unequal air-gap and the angle between the V-shape magnets. The optimization process uses Latin Hypercube Sampling (LHS), the Kriging method, and a genetic algorithm (GA). Finite element analysis (FEA) is also utilized to analyze the torque and demagnetization characteristics. The torque ripple and demagnetization temperature of the initial 45 kW PMSM model with an unequal air-gap are 10% and 146.8 degrees, respectively, which approach a critical level for electric vehicle application. Therefore, the unequal air-gap model is proposed, and an optimization process is conducted. Compared to the initial model, the torque ripple of the optimized unequal air-gap model was reduced by 7.7%. In addition, the demagnetization temperature of the optimized model was increased by 1.8% while maintaining efficiency. These results show the usefulness of a shape-optimized unequal air-gap PMSM in improving the torque ripple and demagnetization temperature for electric vehicles.
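The LHS stage of such an optimization can be sketched as follows: each design variable's range is split into equal-probability strata, one sample is drawn per stratum, and the strata are shuffled independently per variable so every sample covers a distinct slice of each range. The two variables and their bounds below are placeholders standing in for the air-gap shape parameter and V-magnet angle, not the paper's design ranges:

```python
# Minimal Latin Hypercube Sampling sketch for two design variables.
# Bounds are illustrative placeholders, not the paper's values.
import random

def latin_hypercube(n_samples, bounds, rng=None):
    """bounds: list of (low, high) per variable.
    Returns n_samples points, one per stratum in each dimension."""
    rng = rng or random.Random(0)
    dims = []
    for low, high in bounds:
        width = (high - low) / n_samples
        # one uniform draw inside each of the n_samples strata
        strata = [low + width * (i + rng.random()) for i in range(n_samples)]
        rng.shuffle(strata)  # decorrelate strata across dimensions
        dims.append(strata)
    return list(zip(*dims))

# e.g. (air-gap shape parameter, V-magnet angle in degrees) -- hypothetical
samples = latin_hypercube(10, [(0.5, 1.5), (100.0, 140.0)])
```

Each sampled design point would then be evaluated by FEA, the results used to fit the Kriging surrogate, and the GA run on that surrogate.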

Keywords: permanent magnet synchronous motor, optimal design, finite element method, torque ripple

Procedia PDF Downloads 272
16335 Existence Theory for First Order Functional Random Differential Equations

Authors: Rajkumar N. Ingle

Abstract:

In this paper, the existence of a solution of nonlinear functional random differential equations of the first order is proved under the Carathéodory condition. The study of functional random differential equations is important in the random analysis of the dynamical systems of universal phenomena. Objectives: Nonlinear functional random differential equations (N.F.R.D.E.s) are useful to scientists, engineers, and mathematicians who are engaged in analyzing universal random phenomena governed by nonlinear random initial value problems of differential equations, with applications in the theory of diffusion and heat conduction. Methodology: Using the concepts of probability theory and functional analysis, the existence theorems for the nonlinear F.R.D.E. are proved using tools such as fixed point theorems. Significance of the study: Our contribution will be the generalization of some well-known results in the theory of nonlinear F.R.D.E.s. Further, our study should be useful to scientists, engineers, economists, and mathematicians in their endeavors to analyze the nonlinear random problems of the universe in a better way.
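The fixed-point machinery behind such existence proofs can be illustrated with a Picard (successive approximation) iteration on one sampled realization of a simple random initial value problem, x'(t) = ωx(t), x(0) = 1, where ω is a random coefficient. This toy linear setup is illustrative only, not the equation class treated in the paper:

```python
# Picard iteration x_{k+1}(t) = 1 + ∫_0^t omega * x_k(s) ds for one
# sampled realization of the random IVP x' = omega*x, x(0) = 1.
import random

def picard(omega, t_grid, iterations=20):
    """Iterate the integral operator (trapezoid rule) to its fixed point."""
    x = [1.0 for _ in t_grid]                # initial guess x_0(t) = 1
    for _ in range(iterations):
        new = [1.0]
        integral = 0.0
        for i in range(1, len(t_grid)):
            h = t_grid[i] - t_grid[i - 1]
            integral += 0.5 * h * (omega * x[i] + omega * x[i - 1])
            new.append(1.0 + integral)
        x = new                              # apply the operator again
    return x

rng = random.Random(1)
omega = rng.uniform(0.5, 1.5)   # one sample of the random coefficient
t = [i / 100.0 for i in range(101)]
solution = picard(omega, t)      # converges toward exp(omega * t)
```

The iterates converge to the unique fixed point of the integral operator, which is exactly the existence-and-uniqueness argument in miniature; the Carathéodory setting relaxes the continuity assumptions on the right-hand side.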

Keywords: Random Fixed Point Theorem, functional random differential equation, N.F.R.D.E., universal random phenomenon

Procedia PDF Downloads 497
16334 Using Combination of Sets of Features of Molecules for Aqueous Solubility Prediction: A Random Forest Model

Authors: Muhammet Baldan, Emel Timuçin

Abstract:

Generally, absorption and bioavailability increase as solubility increases; therefore, it is crucial to predict solubility in drug discovery applications. Molecular descriptors and molecular properties are traditionally used for the prediction of water solubility. There are various key descriptor sets used for this purpose, such as Dragon descriptors, Morgan fingerprints, and MACCS keys, each with different prediction capabilities and varying success across data sets. Structural features are another common source for solubility prediction. However, there are few, if any, studies that combine three or more sets of properties or descriptors to produce a more powerful prediction model. Unlike available models, we used a combination of those feature sets in a random forest machine learning model for improved solubility prediction, to better predict solubility and thereby contribute to drug discovery systems.
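A minimal sketch of the feature-combination idea, assuming scikit-learn is available: concatenate the per-molecule feature blocks column-wise into one matrix and fit a random forest on it. The three random blocks below are placeholders standing in for real descriptor sets (e.g., MACCS bits, Morgan bits, continuous descriptors from a tool like RDKit), and the target is synthetic, so this shows only the wiring, not a real solubility model:

```python
# Combining several per-molecule feature sets into one matrix for a
# random forest. All data below are random placeholders.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n_mols = 50
maccs_like = rng.integers(0, 2, size=(n_mols, 20))    # stand-in MACCS bits
morgan_like = rng.integers(0, 2, size=(n_mols, 30))   # stand-in Morgan bits
descriptors = rng.normal(size=(n_mols, 5))            # stand-in descriptors

# column-wise concatenation is the "combination of sets of features"
X = np.hstack([maccs_like, morgan_like, descriptors])
y = rng.normal(size=n_mols)                           # stand-in logS target

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
preds = model.predict(X)
```

In a real study the blocks would come from cheminformatics software, and performance would be reported on a held-out split rather than on the training data.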

Keywords: solubility, random forest, molecular descriptors, MACCS keys

Procedia PDF Downloads 40
16333 WhatsApp as Part of a Blended Learning Model to Help Programming Novices

Authors: Tlou J. Ramabu

Abstract:

Programming is one of the challenging subjects in the field of computing. In higher education, the performance, retention rate, and success rate of some programming novices are not improving. Most of the time, the problem is caused by a slow pace of learning, difficulty in grasping the syntax of the programming language, and poor logical skills. More importantly, programming forms part of the major subjects within the field of computing. As a result, specialized pedagogical methods and innovation are highly recommended. Little research has been done on the potential productivity of the WhatsApp platform as part of a blended learning model. In this article, the authors discuss a WhatsApp group as part of a blended learning model incorporated for a group of programming novices. We discuss possible administrative activities for productive utilisation of the WhatsApp group within the blended learning model. The aim is to take advantage of the popularity of WhatsApp and the time students spend on it for educational purposes. We believe that blended learning featuring a WhatsApp group may ease novices' cognitive load and strengthen their foundational programming knowledge and skills. This is a work in progress, as the proposed blended learning model with WhatsApp incorporated is yet to be implemented.

Keywords: blended learning, higher education, WhatsApp, programming, novices, lecturers

Procedia PDF Downloads 167