Search results for: equation modeling methods
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 19340

13400 Derivatives Balance Method for Linear and Nonlinear Control Systems

Authors: Musaab Mohammed Ahmed Ali, Vladimir Vodichev

Abstract:

This work deals with a universal control technique, or single controller, for linear and nonlinear stabilization and tracking control systems. These systems may be structured as SISO or MIMO, and the parameters of the controlled plants can vary over a wide range. A novel control system design method is introduced: the construction of stable platform orbits using derivative balance, which solves the problem of preserving the transfer function stability of a linear system under partial substitution by a rational function. The universal controller is proposed as a polar system with multiple orbits to simplify the design procedure, where each orbit represents a single order of the controller transfer function. The designed controller consists of proportional, integral, and derivative terms and multiple feedback and feedforward loops. A method for synthesizing the controller parameters is presented. In general, the controller parameters depend on a new polynomial equation in which all parameters are related to each other and take fixed values, with no retuning required. The simulation results show that the proposed universal controller can stabilize an unlimited number of linear and nonlinear plants while shaping a desired, previously specified performance. It is shown that sensor errors and poor performance are completely compensated and cannot affect system performance, and that disturbances and noise entering the controller loop are fully rejected. The technical and economic effects of using the proposed controller are investigated and compared to adaptive, predictive, and robust controllers. The economic analysis shows the advantage of a single controller with fixed parameters driving an unlimited number of plants compared to the above-mentioned control techniques.
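
The proportional, integral, and derivative terms named in the abstract can be illustrated with a minimal discrete-time loop. This is a generic PID sketch, not the authors' derivative-balance design; the plant model and the gains below are illustrative assumptions:

```python
# Minimal discrete-time PID loop driving a first-order plant.
# Gains and plant dynamics are illustrative; the paper's derivative
# balance method and orbit-based parameter synthesis are not reproduced.

def pid_step(state, error, dt, kp, ki, kd):
    integral, prev_error = state
    integral += error * dt                      # accumulate the I term
    derivative = (error - prev_error) / dt      # finite-difference D term
    u = kp * error + ki * integral + kd * derivative
    return u, (integral, error)

def simulate(setpoint=1.0, steps=2000, dt=0.01):
    y = 0.0                      # plant output
    state = (0.0, 0.0)           # (integral, previous error)
    for _ in range(steps):
        u, state = pid_step(state, setpoint - y, dt, kp=4.0, ki=2.0, kd=0.1)
        y += dt * (-y + u)       # first-order plant: dy/dt = -y + u
    return y

print(round(simulate(), 3))  # settles near the setpoint of 1.0
```

The integral term drives the steady-state error to zero, which is why the output converges to the setpoint regardless of the plant gain.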

Keywords: derivative balance, fixed parameters, stable platform, universal control

Procedia PDF Downloads 124
13399 Optimization of Turbocharged Diesel Engines

Authors: Ebrahim Safarian, Kadir Bilen, Akif Ceviz

Abstract:

The turbocharger and turbocharging have become inherent components of diesel engines, so that critical parameters of such engines, such as BSFC (brake specific fuel consumption) or thermal efficiency, fuel consumption, BMEP (brake mean effective pressure), power density output, and emission level, have been improved extensively. In general, the turbocharger can be considered the most complex component of a diesel engine, because it closely interrelates the turbomachinery concepts of turbines and compressors with the thermodynamic fundamentals of internal combustion engines and the stress analysis of all components. In this paper, a wastegate for a conventional single-stage radial turbine is investigated by considering turbocharger operating constraints and engine operating conditions, without any detailed design of the turbine or the compressor. The wastegate opening, which ranges between fully open and fully closed, is determined by limiting the compressor boost pressure ratio. An optimum point with regard to the above items is sought using three linked meanline modeling programs: the Turbomatch®, Compal®, and Rital® modules of Concepts NREC®.

Keywords: turbocharger, wastegate, diesel engine, concept NREC programs

Procedia PDF Downloads 231
13398 Approaches to Inducing Obsessional Stress in Obsessive-Compulsive Disorder (OCD): An Empirical Study with Patients Undergoing Transcranial Magnetic Stimulation (TMS) Therapy

Authors: Lucia Liu, Matthew Koziol

Abstract:

Obsessive-compulsive disorder (OCD), a long-lasting anxiety disorder involving recurrent, intrusive thoughts, affects over 2 million adults in the United States. Transcranial magnetic stimulation (TMS) stands out as a noninvasive, cutting-edge therapy that has been shown to reduce symptoms in patients with treatment-resistant OCD. The Food and Drug Administration (FDA)-approved protocol pairs TMS sessions with individualized symptom provocation, aiming to improve the susceptibility of brain circuits to stimulation. However, limited standardization or guidance exists on how to conduct symptom provocation and which methods are most effective. This study aims to compare the effect of internal versus external techniques for inducing obsessional stress in a clinical setting during TMS therapy. Two symptom provocation methods, (i) asking patients thought-provoking questions about their obsessions (internal) and (ii) requesting patients to perform obsession-related tasks (external), were employed in a crossover design with repeated measurements. Thirty-six treatments of NeuroStar TMS were administered to each of two patients over 8 weeks in an outpatient clinic. Patient One received 18 sessions of internal provocation followed by 18 sessions of external provocation, while Patient Two received 18 sessions of external provocation followed by 18 sessions of internal provocation. The primary outcome was the level of self-reported obsessional stress on a visual analog scale from 1 to 10. The secondary outcome was self-reported OCD severity, collected biweekly on a four-level Likert scale (1 to 4): bad, fair, good, and excellent. Outcomes were compared and tested between provocation arms through repeated measures ANOVA, accounting for intra-patient correlations. Ages were 42 for Patient One (male, White) and 57 for Patient Two (male, White). Both patients had similar moderate symptoms at baseline, as determined through the Yale-Brown Obsessive Compulsive Scale (YBOCS).
When comparing obsessional stress induced across the two arms of internal and external provocation methods, the mean (SD) was 6.03 (1.18) for internal and 4.01 (1.28) for external strategies (P=0.0019); ranges were 3 to 8 for internal and 2 to 8 for external strategies. Internal provocation yielded 5 (31.25%) bad, 6 (33.33%) fair, 3 (18.75%) good, and 2 (12.5%) excellent responses for OCD status, while external provocation yielded 5 (31.25%) bad, 9 (56.25%) fair, 1 (6.25%) good, and 1 (6.25%) excellent responses (P=0.58). Internal symptom provocation tactics had a significantly stronger impact on inducing obsessional stress and led to better OCD status, although the latter difference was not statistically significant. This could be attributed to the fact that answering questions may prompt patients to reflect more on their lived experiences and struggles with OCD. In the future, clinical trials with larger sample sizes are warranted to validate this finding. The results support increased integration of internal methods into structured provocation protocols, potentially reducing the time required for provocation and achieving greater treatment response to TMS.
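
The kind of two-arm comparison reported above can be sketched with a simple Welch two-sample t statistic. The study itself used repeated-measures ANOVA, which additionally accounts for intra-patient correlation; the scores below are synthetic illustrations, not the study's data:

```python
# Comparing self-reported stress scores between two provocation arms
# with a Welch t statistic. The data are synthetic; the study analyzed
# its real scores with repeated-measures ANOVA instead.
from statistics import mean, stdev
from math import sqrt

internal = [6, 7, 5, 8, 6, 5, 7, 6, 4, 6, 7, 5]   # illustrative scores
external = [4, 5, 3, 4, 6, 3, 4, 5, 2, 4, 5, 3]   # illustrative scores

def welch_t(a, b):
    """Welch's t: mean difference scaled by the pooled standard error."""
    va, vb = stdev(a) ** 2, stdev(b) ** 2
    return (mean(a) - mean(b)) / sqrt(va / len(a) + vb / len(b))

print(round(mean(internal), 2), round(mean(external), 2),
      round(welch_t(internal, external), 2))
```

A large positive t indicates the internal arm induced more stress than the external arm, mirroring the direction of the reported result.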

Keywords: obsessive-compulsive disorder, transcranial magnetic stimulation, mental health, symptom provocation

Procedia PDF Downloads 42
13397 Scale Effects on the Wake Airflow of a Heavy Truck

Authors: Aude Pérard Lecomte, Georges Fokoua, Amine Mehel, Anne Tanière

Abstract:

Air quality in urban areas is deteriorated by pollution, mainly due to the constant increase in traffic of different types of ground vehicles. In particular, particulate matter pollution, with high concentrations in urban areas, can cause serious health issues. Characterizing and understanding particle dynamics is therefore essential to establish recommendations for improving air quality in urban areas. To analyze the effects of turbulence on the dispersion of particulate pollutants, the first step is to focus on the single-phase flow structure and turbulence characteristics in the wake of a heavy truck model. To achieve this, Computational Fluid Dynamics (CFD) simulations were conducted with the aim of modeling the wake airflow of a full- and a reduced-scale heavy truck. The Reynolds-Averaged Navier-Stokes (RANS) approach with the Reynolds Stress Model (RSM) as the turbulence closure was used. The simulations highlight the appearance of a large vortex originating from under the trailer. This vortex belongs to the recirculation region located in the near wake of the heavy truck. These vortical structures are expected to have a strong influence on the dynamics of the particles emitted by the truck.

Keywords: CFD, heavy truck, recirculation region, reduced scale

Procedia PDF Downloads 202
13396 Developing High-Definition Flood Inundation Maps (HD-FIMs) Using Raster Adjustment with Scenario Profiles (RASP™)

Authors: Robert Jacobsen

Abstract:

Flood inundation maps (FIMs) are an essential tool for communicating flood threat scenarios to the public as well as for floodplain governance. With an increasing demand for online raster FIMs, the FIM state of the practice (SOP) is rapidly advancing to meet the dual requirements of high resolution and high accuracy, i.e., high definition. Importantly, today's technology also enables the resolution of local (neighborhood-scale) bias errors that often occur in FIMs, even with the use of SOP two-dimensional flood modeling. To facilitate the development of HD-FIMs, a new GIS method, Raster Adjustment with Scenario Profiles (RASP™), is described for adjusting kernel raster FIMs to match refined scenario profiles. With RASP™, flood professionals can prepare HD-FIMs for a wide range of scenarios from available kernel rasters, including kernel rasters prepared from vector FIMs. The paper provides detailed procedures for RASP™, along with an example of applying it to prepare an HD-FIM for the August 2016 flood in Louisiana using both an SOP kernel raster and a kernel raster derived from an older vector-based flood insurance rate map. The accuracy of the HD-FIMs achieved by applying RASP™ to the two kernel rasters is evaluated.
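
The core raster arithmetic behind any inundation map can be sketched as follows. This shows only the generic step of differencing a scenario water-surface elevation (WSE) grid against a ground-elevation grid; the paper's RASP™ procedure for adjusting kernel rasters to refined scenario profiles involves additional GIS steps not reproduced here, and the grids below are illustrative:

```python
# Cell-by-cell flood depth: scenario water-surface elevation minus
# ground elevation, floored at zero (dry cells). Grids are illustrative.

def depth_raster(ground, wse):
    """Return a depth grid; values rounded to 2 decimals for display."""
    return [[round(max(0.0, w - g), 2) for g, w in zip(grow, wrow)]
            for grow, wrow in zip(ground, wse)]

ground = [[10.0, 11.0, 12.5],
          [ 9.5, 10.5, 13.0]]   # ground elevations (m)
wse    = [[11.2, 11.2, 11.2],
          [11.0, 11.0, 11.0]]   # scenario water-surface elevations (m)

print(depth_raster(ground, wse))
```

Cells where the ground sits above the water surface come out as 0.0, i.e., outside the inundation extent.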

Keywords: hydrology, mapping, high-definition, inundation

Procedia PDF Downloads 58
13395 The Difference in Learning Outcomes in Reading Comprehension between Text and Film as Media in Indonesian Language for Foreign Speakers at the Intermediate Level

Authors: Siti Ayu Ningsih

Abstract:

This study aims to find the differences in learning outcomes for reading comprehension with text and film as media in Indonesian Language for Foreign Speakers (BIPA) learning at the intermediate level. Using quantitative and qualitative research methods, the study draws on a single respondent from D'Royal Morocco Integrative Islamic School, in grade nine of the secondary level. The quantitative method was used to calculate the learning outcomes after the appropriate action cycle had been given, whereas the qualitative method was used to interpret and describe the findings derived from the quantitative method. The techniques used in this study were observation and the testing of written work. Based on the research, it is known that the text medium is more effective than film for intermediate-level BIPA learners. This is because, when using film, the learner does not have enough time to note down difficult vocabulary or to look up its meaning in the dictionary. The use of text, by contrast, shows better effectiveness because it does not require additional time to note down difficult words: for words that are difficult or unfamiliar, the learner can immediately find their meaning in the dictionary. The presence of the text also greatly helps the BIPA learner to find answers to the questions more easily by matching the vocabulary of the question against the text.

Keywords: Indonesian language for foreign speaker, learning outcome, media, reading comprehension

Procedia PDF Downloads 183
13394 Molecular Profiles of Microbial Etiologic Agents Forming Biofilm in Urinary Tract Infections of Pregnant Women by RT-PCR Assay

Authors: B. Nageshwar Rao

Abstract:

Urinary tract infection (UTI) represents the most commonly acquired bacterial infection worldwide, with substantial morbidity, mortality, and economic burden. The objective of the study is to characterize the profiles of uropathogens in the obstetric population by RT-PCR. Study design: An observational cross-sectional study was performed at a single tertiary health care hospital among 50 pregnant women with UTIs, including asymptomatic and symptomatic patients attending the outpatient and inpatient departments of Obstetrics and Gynaecology. Methods: Serotyping and gene detection of various uropathogens were carried out using RT-PCR. Pulsed-field gel electrophoresis was used to determine the various genetic profiles. Results: The present study shows that the CsgD protein, involved in biofilm formation in Escherichia coli, and the VIM1 and IMP1 genes for Klebsiella were identified using the RT-PCR method. Our results showed that the prevalence of the VIM1 and IMP1 genes and of the CsgD protein in E. coli had a significant relationship with strong biofilm formation, which may be due to the prevalence of these specific genes. Finally, the RT-PCR results for the two bacteria were correlated with each other, and it was concluded that the above uropathogens were common biofilm-producing isolates in pregnant women suffering from urinary tract infection in our hospital-based observational study.

Keywords: biofilms, Klebsiella, E.coli, urinary tract infection

Procedia PDF Downloads 102
13393 The Impact of Professional Development in the Area of Technology Enhanced Learning on Higher Education Teaching Practices Across Atlantic Technological University – Research Methodology and Preliminary Findings

Authors: Annette Cosgrove

Abstract:

The objective of this research study is to examine the impact of professional development in Technology Enhanced Learning (TEL) and the digitisation of learning in teaching communities across multiple higher education sites of the ATU (Atlantic Technological University) (2020-2025), including the proposal of an evidence-based digital teaching model for use in a future pandemic. The research strategy undertaken for this PhD study is a multi-site study using mixed methods. Qualitative and quantitative methods are being used to collect data. A pilot study was carried out initially, feedback was collected, and the research instrument was edited to reflect this feedback before being administered. The purpose of the staff questionnaire is to evaluate the impact of professional development in the area of TEL and to capture practitioners' views on the perceived impact on their teaching practice in the higher education sector across ATU (west of Ireland, 5 higher education locations). The phenomenon being explored is 'the impact of professional development in the area of technology enhanced learning on teaching practice in a higher education institution'. The research methodology chosen for this study is an action-based research study. The researcher has chosen this approach as it is a prime strategy for developing educational theory and enhancing educational practice. The study includes quantitative and qualitative methods to elicit data that will quantify the impact that continuous professional development in the area of digital teaching practice and technologies has on practitioners' teaching practice in higher education. The research instruments/data collection tools for this study include a lecturer survey with a targeted TEL practice group (pre- and post-Covid experience) and semi-structured interviews with lecturers.
This research is currently being conducted across the ATU multi-site campus, targeting higher education lecturers who have completed formal CPD in the area of digital teaching. ATU, a west of Ireland university, is the focus of the study. The research questionnaire has been deployed, with 75 respondents to date across the ATU; the primary questionnaire and semi-structured interviews are currently ongoing, the purpose being to evaluate the impact of formal professional development in the area of TEL and its perceived impact on practitioners' teaching practice in digital teaching and learning. This paper will present initial findings, reflections, and data from this ongoing research study.

Keywords: TEL, DTL, digital teaching, digital assessment

Procedia PDF Downloads 50
13392 Noncommutative Differential Structure on Finite Groups

Authors: Ibtisam Masmali, Edwin Beggs

Abstract:

In this paper, we take an example of a differential calculus on the finite group A4. We then apply methods of noncommutative differential geometry to this example and see how similar the results are to those of classical differential geometry.

Keywords: differential calculi, finite group A4, Christoffel symbols, covariant derivative, torsion compatible

Procedia PDF Downloads 234
13391 Developing an Out-of-Distribution Generalization Model Selection Framework through Impurity and Randomness Measurements and a Bias Index

Authors: Todd Zhou, Mikhail Yurochkin

Abstract:

Out-of-distribution (OOD) detection is receiving increasing attention in the machine learning research community, boosted by recent technologies such as autonomous driving and image processing. This newly burgeoning field has called for more effective and efficient out-of-distribution generalization methods. Without access to label information, deploying machine learning models to out-of-distribution domains becomes extremely challenging, since it is impossible to evaluate model performance on unseen domains. To tackle this difficulty, we designed a model selection pipeline algorithm and developed a model selection framework with different impurity and randomness measurements to evaluate and choose the best-performing models for out-of-distribution data. By exploring different randomness scores based on predicted probabilities, we adopted the out-of-distribution entropy and developed a custom-designed score, "CombinedScore," as the evaluation criterion. This proposed score was created by adding labeled source information into the judging space of the uncertainty entropy score using the harmonic mean. Furthermore, prediction bias was explored through an equality-of-opportunity violation measurement. We also improved machine learning model performance through model calibration. The effectiveness of the framework with the proposed evaluation criteria was validated on the Folktables American Community Survey (ACS) datasets.
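
The idea of blending an uncertainty entropy score with labeled source information via a harmonic mean can be sketched as follows. The abstract does not give the exact "CombinedScore" formula, so the particular mapping of entropy to a confidence value and the pairing with source-domain accuracy below are illustrative assumptions:

```python
# Sketch of an entropy-based OOD model-selection score: blend a
# confidence value derived from prediction entropy on unlabeled OOD
# data with labeled source-domain accuracy via the harmonic mean.
# The exact CombinedScore formula in the paper may differ.
from math import log

def prediction_entropy(probs):
    """Shannon entropy of one predicted probability vector."""
    return -sum(p * log(p) for p in probs if p > 0)

def harmonic_mean(a, b):
    return 2 * a * b / (a + b)

def combined_score(ood_probs, source_accuracy):
    # Map mean OOD entropy to a confidence-like value in (0, 1],
    # then blend it with source accuracy (illustrative choice).
    mean_entropy = sum(map(prediction_entropy, ood_probs)) / len(ood_probs)
    confidence = 1.0 / (1.0 + mean_entropy)
    return harmonic_mean(confidence, source_accuracy)

probs = [[0.9, 0.05, 0.05], [0.6, 0.3, 0.1]]   # illustrative predictions
print(round(combined_score(probs, source_accuracy=0.85), 3))
```

Because the harmonic mean is dominated by the smaller of its two inputs, a model must score well on both OOD confidence and source accuracy to rank highly.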

Keywords: model selection, domain generalization, model fairness, randomness measurements, bias index

Procedia PDF Downloads 111
13390 Non-Destructive Testing of Selective Laser Melting Products

Authors: Luca Collini, Michele Antolotti, Diego Schiavi

Abstract:

At present, complex geometries, shrinking production times, rapidly increasing demand, and high quality-standard requirements make non-destructive (ND) control of additively manufactured components an indispensable means. On the other hand, a technology gap and the lack of standards regulating the methods and the acceptance criteria make the NDT of these components a stimulating field still to be fully explored. To date, penetrant testing, acoustic wave, tomography, radiography, and semi-automated ultrasound methods have been tested on metal-powder-based products. External defects, distortion, surface porosity, roughness, texture, internal porosity, and inclusions are the typical defects in the focus of testing. Detection of density and layer compactness has also been tried on stainless steels by the ultrasonic scattering method. In this work, the authors present and discuss radiographic and ultrasound ND testing of additively manufactured Ti₆Al₄V and Inconel parts obtained by selective laser melting (SLM) technology. In order to test the possibilities offered by the radiographic method, both X-rays and γ-rays are tried on a set of specifically designed specimens realized by SLM. The specimens contain a family of defects representing the most commonly found types, such as cracks and lack of fusion. The tests are also applied to real parts of various complexity and thickness. A set of practical indications and acceptance criteria is finally drawn.

Keywords: non-destructive testing, selective laser melting, radiography, UT method

Procedia PDF Downloads 129
13389 The Formulation of R&D Strategy for Biofuel Technology: A Case Study of the Aviation Industry in Iran

Authors: Maryam Amiri, Ali Rajabzade, Gholam Reza Goudarzi, Reza Heidari

Abstract:

Technology growth and environmental change are fast; therefore, companies and industries have a strong tendency to carry out R&D activities for active participation in the market and the achievement of competitive advantage. The aviation industry and its subdivisions involve high-level technology and play a special role in the economic and social development of countries. Thus, in order to acquire new technologies and compete with the aviation industries of other countries, capability in R&D is required. An appropriate R&D strategy helps ensure that state-of-the-art technologies can be achieved. Biofuel technology is one of the newest technologies under worldwide discussion in the aviation industry. The purpose of this research has been the formulation of an R&D strategy for biofuel technology in the aviation industry of Iran. After reviewing the theoretical foundations of R&D methods and strategies, we classified R&D strategies into four main categories: internal R&D, collaborative R&D, outsourced R&D, and in-house R&D. After this review, a model for formulating an R&D strategy with the aim of developing biofuel technology in the aviation industry in Iran was offered. With regard to the requirements and characteristics of the industry and the technology, the model presents an integrated approach to R&D. Based on decision-making techniques and the analysis of structured expert opinion, four R&D strategies for different scenarios, aimed at developing biofuel technology in the aviation industry in Iran, were recommended. In this research, based on the common features of the R&D implementation process, a logical classification of these methods is presented as R&D strategies. R&D strategies and their characteristics were then developed according to the experts. Finally, we introduced a model to consider the role of the aviation industry and biofuel technology in R&D strategies and, for the various conditions and scenarios of the aviation industry, formulated a specific R&D strategy.

Keywords: aviation industry, biofuel technology, R&D, R&D strategy

Procedia PDF Downloads 558
13388 Exploration of Artificial Neural Network and Response Surface Methodology in Removal of Industrial Effluents

Authors: Rakesh Namdeti

Abstract:

Toxic dyes found in industrial effluent must be treated before disposal due to their harmful impact on human health and aquatic life. Thus, Musa acuminata (banana leaves) was employed as a biosorbent in this work to remove methylene blue from a synthetic solution. The effects of process parameters such as temperature, pH, biosorbent dosage, and initial methylene blue concentration on the percentage of dye removal were investigated using a central composite design (CCD). The response was modelled using a quadratic model based on the CCD. Analysis of variance (ANOVA) revealed the most influential factor on the experimental design response. A temperature of 44.3 °C, a pH of 7.1, a biosorbent dose of 0.3 g, and an initial methylene blue concentration of 48.4 mg/L, giving 84.26% dye removal, were the optimal conditions for Musa acuminata (banana leaf powder). At these optimal conditions, the experimental percentage of biosorption was 76.93%. The agreement between the estimates of the developed ANN model and the experimental results defined the success of the ANN modeling. As a result, the study's experimental results were found to be quite close to the model's predicted outcomes.
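
The run layout of a central composite design like the one used here can be generated mechanically: a two-level factorial core, axial (star) points, and center replicates. The sketch below uses the four listed factors and a rotatable axial distance; the paper's exact design settings (axial distance, number of center runs) are not given in the abstract, so those are assumptions:

```python
# Generate central composite design (CCD) points in coded units for
# k factors: a 2^k factorial core, 2k axial points at distance alpha,
# and center replicates. Alpha = (2^k)^(1/4) gives a rotatable design;
# the study's actual alpha and center-run count are not stated.
from itertools import product

def ccd_points(k, center_runs=1):
    alpha = (2 ** k) ** 0.25
    factorial = [list(p) for p in product([-1.0, 1.0], repeat=k)]
    axial = []
    for i in range(k):
        for a in (-alpha, alpha):
            pt = [0.0] * k
            pt[i] = a                    # perturb one factor at a time
            axial.append(pt)
    center = [[0.0] * k for _ in range(center_runs)]
    return factorial + axial + center

design = ccd_points(k=4, center_runs=6)
print(len(design))  # 2^4 + 2*4 + 6 = 30 runs
```

Each coded point is then mapped back to real units (temperature, pH, dose, concentration) before running the experiment, and the responses are fitted with the quadratic model.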

Keywords: Musa acuminata, central composite design, methylene blue, artificial neural network

Procedia PDF Downloads 54
13387 Scope of Implementing Building Information Modeling in AEC Industry Firms in India

Authors: Padmini Raman

Abstract:

The architecture, engineering, and construction (AEC) industry is facing enormous technological and institutional changes and challenges, including information technology and the appropriate application of sustainable practices. The engineer and architect must be able to cope with a rapid pace of technological change. BIM is a unique process of producing and managing a building by exploring a digital model before the actual project is constructed, and later during its construction, facility operation, and maintenance. BIM has been adopted by construction contractors and architects in Western countries, mostly in the US and UK, to improve the planning and management of construction projects. In India, BIM is at a basic stage of adoption only; several issues with data acquisition and management arise during the design formation and planning of a construction project due to the complexity, ambiguity, and fragmented nature of the Indian construction industry. This paper presents a strategy for India's AEC firms to successfully implement BIM in their current working processes. By surveying and collecting data on the problems faced by these architectural firms, it analyses how to prevent such situations from arising and, thus, how to introduce BIM capabilities in such firms in the most effective way, given that this application is widely accepted throughout the industry in many countries for managing project information for cost control and facilities management.

Keywords: AEC industry, building information modeling, Indian industry, new technology, BIM implementation in India

Procedia PDF Downloads 431
13386 Electrokinetic Remediation of Nickel Contaminated Clayey Soils

Authors: Waddah S. Abdullah, Saleh M. Al-Sarem

Abstract:

Electrokinetic remediation of contaminated soils has undoubtedly proven to be one of the most efficient techniques used to clean up soils contaminated with polar contaminants (such as heavy metals) and nonpolar organic contaminants. It can efficiently be used to clean up low-permeability mud, wastewater, electroplating wastes, sludge, and marine dredging. EK processes have proved to be superior to other conventional methods, such as pump-and-treat and soil washing, since these methods are ineffective in such cases. This paper describes the use of electrokinetic remediation to clean up soils contaminated with nickel. Open cells, as well as advanced cylindrical cells, were used to perform the electrokinetic experiments. Azraq green clay (a low-permeability soil taken from the eastern part of Jordan) was used for the experiments. The clayey soil was spiked with 500 ppm of nickel. The EK experiments were conducted under direct currents of 80 mA and 50 mA. A chelating agent, disodium ethylenediaminetetraacetic acid (Na-EDTA), was used to enhance the electroremediation processes. The effect of the presence of carbonates in soils was also investigated by the use of sodium carbonate. pH changes in the anode and the cathode compartments were controlled by using buffer solutions. The results showed that the average removal efficiency was 64% for the nickel-spiked saturated clayey soil. The experimental results showed that carbonates retarded the remediation process of nickel-contaminated soils. Na-EDTA effectively enhanced the decontamination process, with removal efficiency increasing from 64% without Na-EDTA to over 90% with Na-EDTA.
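
The removal efficiency reported above follows the standard definition used in remediation studies: the fraction of the initial contaminant concentration removed, as a percentage. A quick sketch, where the residual concentration is an illustrative value chosen to reproduce the reported 64% (the study reports the efficiency, not the residual):

```python
# Removal efficiency as commonly defined in remediation work:
# (initial - final) / initial, expressed as a percentage.
# The final concentration below is illustrative, not a study value.

def removal_efficiency(c_initial, c_final):
    return (c_initial - c_final) / c_initial * 100.0

# nickel spiked at 500 ppm; 180 ppm remaining would give 64% removal
print(round(removal_efficiency(500.0, 180.0), 1))  # 64.0
```

The same formula applied to the Na-EDTA-enhanced runs would require a residual below 50 ppm to exceed the reported 90% efficiency.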

Keywords: buffer solution, contaminated soils, EDTA enhancement, electrokinetic processes, Nickel contaminated soil, soil remediation

Procedia PDF Downloads 233
13385 The Role of Development in Settling Migration Crisis: The Preventive Approach of the European Union in Relations with Sub-Saharan African States

Authors: Artsiom Zinchanka

Abstract:

The world now faces one of the largest migration crises, and the European Union meets challenges in accepting a flow of migrants that cannot yet be fully handled. This crisis is complicated by many factors, such as the military conflict in the Middle East and the absence of appropriate conditions in the refugee camps, but also by the complexity of a migration flow consisting of Sub-Saharan migrants. These migrants leave their homelands for many reasons, including poverty, inadequate social and economic conditions, and the absence of infrastructure and of access to education and medical care. In practice, when the restrictive approach, directed at limiting illicit migration and sending illicit migrants back to their homelands, does not always work, an approach directed at the root causes of the migration crisis can be more effective in settling it. The Cotonou Agreement and the subsequent treaties concluded between the European Union and Sub-Saharan states show that the European Union considers the development of human rights and appropriate social and economic conditions in the Sub-Saharan states as one of the most important factors in addressing the migration crisis. The preventive approach, namely the efforts of the European Union to develop appropriate social and economic conditions in Sub-Saharan states, is considered in this article, as well as its evolution and current condition. The article also considers the pros and cons of this approach and the obstacles it faces. The research methods include a review of literature and documents and analytical and descriptive methods.

Keywords: migration crisis, preventive approach, Sub-Saharan States, the European Union

Procedia PDF Downloads 113
13384 Experiential Learning: Roles and Attributes of an Optometry Educator Recommended by a Millennial Generation

Authors: E. Kempen, M. J. Labuschagne, M. P. Jama

Abstract:

There is evidence that experiential learning is truly influential and favored by the millennial generation. However, little is known about the roles and attributes an educator has to adopt during the experiential learning cycle, especially when applied in optometry education. This study aimed to identify the roles and attributes of an optometry educator during the different modes of the experiential learning cycle. Methods: A qualitative case study design was used. Data were collected using an open-ended questionnaire survey, following the application of nine different teaching-learning methods based on the experiential learning cycle. The total sample population of 68 undergraduate students from the Department of Optometry at the University of the Free State, South Africa, was invited to participate. Focus group interviews (n=15) added data that contributed to the interpretation and confirmation of the data obtained from the questionnaire surveys. Results: The perceptions and experiences of the students identified a variety of roles and attributes, as well as recommendations on the effective adoption of these roles and attributes. These roles and attributes included being knowledgeable, creating interest, providing guidance, being approachable, building confidence, implementing ground rules, leading by example, and acting as a mediator. Conclusion: The findings suggest that the actions of an educator have the most substantial impact on students' perception of a learning experience. Not only are the recommendations based on the views of a millennial generation, but the implementation of the personalized recommendations may also transform a learning environment. This may lead an optometry student to a deeper understanding of knowledge.

Keywords: experiences and perceptions, experiential learning, millennial generation, recommendation for optometry education

Procedia PDF Downloads 104
13383 Measurement of Echocardiographic Ejection Fraction Reference Values and Evaluation between Body Weight and Ejection Fraction in Domestic Rabbits (Oryctolagus cuniculus)

Authors: Reza Behmanesh, Mohammad Nasrolahzadeh-Masouleh, Ehsan Khaksar, Saeed Bokaie

Abstract:

Domestic rabbits (Oryctolagus cuniculus) are an excellent model for cardiovascular research because their size is more suitable for study and experimentation than that of smaller animals. Echocardiography is one of the most important diagnostic imaging methods; it is used today to evaluate the cardiovascular system anatomically and functionally and is among the most accurate and sensitive non-invasive methods for examining heart disease. Ventricular function indices can be assessed with cardiac imaging techniques. One important cardiac parameter is the ejection fraction (EF), which holds a valuable place alongside the other parameters involved. EF is a measure of the percentage of blood ejected from the heart with each contraction. For this study, 100 adult and young standard domestic rabbits, six months to one year old and of both sexes (50 females and 50 males), were examined without anesthesia or sedation. The mean EF was 58.753 ± 6.889 in males and 61.397 ± 6.530 in females; these values are comparable to those reported in standard references, and no significant difference was found between this research and other studies. There was no significant difference in EF between most weight groups, but a significant difference (p < 0.05) was found between the 2161–2320 g and 2481–2640 g groups. Echocardiographic EF reference values for non-anesthetized domestic rabbits (Oryctolagus cuniculus) are presented, providing reference values for future studies.
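The parameter at the heart of this study has a simple definition; as a minimal sketch (with made-up volumes, not the study's measurements), EF can be computed from end-diastolic and end-systolic volumes:

```python
def ejection_fraction(edv_ml, esv_ml):
    """EF (%) = (EDV - ESV) / EDV * 100: the fraction of blood ejected per beat."""
    return (edv_ml - esv_ml) / edv_ml * 100.0

# Hypothetical volumes for a rabbit left ventricle (illustrative only):
ef = ejection_fraction(edv_ml=4.0, esv_ml=1.6)  # 60.0 %
```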

Keywords: echocardiography, ejection fraction, rabbit, heart

Procedia PDF Downloads 79
13382 A First Step towards Automatic Evolutionary for Gas Lifts Allocation Optimization

Authors: Younis Elhaddad, Alfonso Ortega

Abstract:

Oil production by means of gas lift is a standard technique in the oil production industry. Optimizing the total amount of oil produced in terms of the amount of gas injected is a key question in this domain. Different methods have been tested to propose a general methodology. Many of them apply well-known numerical methods; some have exploited the power of evolutionary approaches. Our goal is to provide the experts of the domain with a powerful automatic search engine into which they can introduce their knowledge in a format close to the one used in their domain, and from which they obtain solutions comprehensible in the same terms. Our previous proposals introduced into the genetic engine highly expressive formal models to represent the solutions to the problem. These algorithms have proven to be as effective as other genetic systems but more flexible and comfortable for the researcher, although they usually require huge search spaces to justify their use, owing to the computational resources the formal models involve. The first step in evaluating the viability of applying our approaches to this realm is to fully understand the domain and to select an instance of the problem (gas lift optimization) in which applying genetic approaches seems promising. After analyzing the state of the art of this topic, we chose a previous work from the literature that tackles the problem by means of numerical methods. This contribution includes enough detail to be reproduced and complete data to be carefully analyzed. We have designed a classical, simple genetic algorithm to try to reproduce the same results and to understand the problem in depth. We could easily incorporate the well model and the well data used by the authors, and translate their mathematical model, originally posed for numerical optimization, into a proper fitness function.
We have analyzed the 100 curves they use in their experiment, and similar results were observed. In addition, our system automatically inferred an optimum total amount of injected gas for the field compatible with the sum of the optimum gas injected in each well reported by them. We have identified several constraints that could be interesting to incorporate into the optimization process but that could be difficult to express numerically. It could also be interesting to automatically propose other mathematical models to fit both the individual well curves and the behaviour of the complete field. All these facts and conclusions justify continuing to explore the viability of applying the more sophisticated approaches previously proposed by our research group.
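A classical, simple genetic algorithm of the kind described can be sketched as follows; the well-performance curves, gas budget, and GA settings below are hypothetical stand-ins, not the paper's model or data:

```python
import random

# Hypothetical well-performance curves (not the paper's data): oil rate as a
# concave function of injected gas, q(g) = a*g / (b + g), one (a, b) per well.
WELLS = [(120.0, 30.0), (90.0, 20.0), (150.0, 50.0)]
GAS_BUDGET = 60.0  # total gas available for injection

def total_oil(alloc):
    """Fitness: total oil rate, with a penalty for exceeding the gas budget."""
    oil = sum(a * g / (b + g) for (a, b), g in zip(WELLS, alloc))
    excess = max(0.0, sum(alloc) - GAS_BUDGET)
    return oil - 10.0 * excess

def evolve(pop_size=40, gens=200, seed=1):
    """Elitist GA with averaging crossover and Gaussian mutation."""
    rng = random.Random(seed)
    pop = [[rng.uniform(0, GAS_BUDGET) for _ in WELLS] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=total_oil, reverse=True)
        parents = pop[: pop_size // 2]          # keep the better half
        children = []
        while len(children) < pop_size - len(parents):
            p1, p2 = rng.sample(parents, 2)
            child = [(x + y) / 2 for x, y in zip(p1, p2)]   # crossover
            i = rng.randrange(len(child))                   # mutation
            child[i] = max(0.0, child[i] + rng.gauss(0, 2.0))
            children.append(child)
        pop = parents + children
    return max(pop, key=total_oil)

best = evolve()
```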

Keywords: evolutionary automatic programming, gas lift, genetic algorithms, oil production

Procedia PDF Downloads 150
13381 Freight Time and Cost Optimization in Complex Logistics Networks, Using a Dimensional Reduction Method and K-Means Algorithm

Authors: Egemen Sert, Leila Hedayatifar, Rachel A. Rigg, Amir Akhavan, Olha Buchel, Dominic Elias Saadi, Aabir Abubaker Kar, Alfredo J. Morales, Yaneer Bar-Yam

Abstract:

The complexity of providing timely and cost-effective distribution of finished goods from industrial facilities to customers makes effective operational coordination difficult, yet effectiveness is crucial for maintaining customer service levels and sustaining a business. Logistics planning becomes increasingly complex with growing numbers of customers, varied geographical locations, the uncertainty of future orders, and sometimes extreme competitive pressure to reduce inventory costs. Linear optimization methods become cumbersome or intractable due to the large number of variables and nonlinear dependencies involved. Here we develop a complex systems approach to optimizing logistics networks based upon dimensional reduction methods and apply our approach to a case study of a manufacturing company. In order to characterize the complexity in customer behavior, we define a “customer space” in which individual customer behavior is described by only the two most relevant dimensions: the distance to production facilities over current transportation routes and the customer's demand frequency. These dimensions provide essential insight into the domain of effective strategies for customers: direct and indirect strategies. In the direct strategy, goods are sent to the customer directly from a production facility using box or bulk trucks. In the indirect strategy, in advance of an order by the customer, goods are shipped to an external warehouse near the customer using trains and then "last-mile" shipped by trucks when orders are placed. Each strategy applies to an area of the customer space, with an indeterminate boundary between them; in practice, company policy determines where this boundary lies. We then identify the optimal delivery strategy for each customer by constructing a detailed model of the costs of transportation and temporary storage in a set of specified external warehouses.
Customer spaces help give an aggregate view of customer behaviors and characteristics. They allow policymakers to compare customers and develop strategies based on the aggregate behavior of the system as a whole. In addition to optimization over existing facilities, using customer logistics and the k-means algorithm, we propose additional warehouse locations. We apply these methods to a medium-sized American manufacturing company with a particular logistics network, consisting of multiple production facilities, external warehouses, and customers along with three types of shipment methods (box truck, bulk truck and train). For the case study, our method forecasts 10.5% savings on yearly transportation costs and an additional 4.6% savings with three new warehouses.
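A minimal version of the k-means step for proposing warehouse sites might look like this (plain NumPy, with made-up customer coordinates; the paper's actual distance metric and data are not reproduced here):

```python
import numpy as np

def kmeans(points, k, iters=50, seed=0):
    """Plain k-means: centroids become candidate warehouse sites,
    labels assign each customer to its nearest site."""
    rng = np.random.default_rng(seed)
    centroids = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iters):
        # Distance of every customer to every candidate site.
        d = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):          # guard against empty clusters
                centroids[j] = points[labels == j].mean(axis=0)
    return centroids, labels

# Made-up customer coordinates forming two well-separated groups:
pts = np.array([[0.0, 0.0], [0.1, 0.2], [0.2, 0.1],
                [5.0, 5.0], [5.1, 5.2], [4.9, 5.1]])
sites, labels = kmeans(pts, k=2)
```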

Keywords: logistics network optimization, direct and indirect strategies, K-means algorithm, dimensional reduction

Procedia PDF Downloads 125
13380 Simulation Analysis of a Full-Scale Five-Story Building with Vibration Control Dampers

Authors: Naohiro Nakamura

Abstract:

Analysis methods that accurately estimate the behavior of buildings during earthquakes are very important for improving the seismic safety of such buildings. Recently, the use of damping devices has increased significantly, and there is a particular need to appropriately evaluate the behavior of buildings with such devices during earthquakes at the design stage. At present, however, the accuracy of these analysis evaluations is not sufficient. One reason is that the accuracy of current analysis methods has not been appropriately verified, because there are very limited data on the behavior of actual buildings during earthquakes. Many types of shaking table tests of large structures are performed at the '3-Dimensional Full-Scale Earthquake Testing Facility' (nicknamed 'E-Defense') operated by the National Research Institute for Earth Science and Disaster Prevention (NIED). In this study, simulations using 3-dimensional analysis models were conducted on a shaking table test of a 5-story steel-frame structure with dampers. The results of the analysis correspond favorably to the test results announced afterward by the committee. However, the suitability of the parameters and models used in the analysis, and the influence they had on the responses, remain unclear. Hence, we conducted additional analyses and studies on these models and parameters. In this paper, an outline of the test is given and the analysis model used is explained. Next, the analysis results are compared with the test results. Then, additional analyses concerning the hysteresis curves of the dampers and the beam-end stiffness of the frame are investigated.

Keywords: three-dimensional analysis, E-Defense, full-scale experiment, vibration control damper

Procedia PDF Downloads 173
13379 Fecundity and Egg Laying in Helicoverpa armigera (Hübner) (Lepidoptera: Noctuidae): Model Development and Field Validation

Authors: Muhammad Noor Ul Ane, Dong-Soon Kim, Myron P. Zalucki

Abstract:

Models can be useful for understanding the population dynamics of insects under diverse environmental conditions and for developing strategies to better manage pest species. Adult longevity and fecundity of Helicoverpa armigera (Hübner) were evaluated against a wide range of constant temperatures (15, 20, 25, 30, 35 and 37.5ᵒC). The modified Sharpe and DeMichele model described the adult aging rate and was used to estimate adult physiological age. Maximum fecundity of H. armigera was 973 eggs/female at 25ᵒC, decreasing to 72 eggs/female at 37.5ᵒC. The relationship between adult fecundity and temperature was well described by an extreme value function. Age-specific cumulative oviposition rate and age-specific survival rate were well described by a two-parameter Weibull function and a sigmoid function, respectively. An oviposition model was developed using three temperature-dependent components: total fecundity, age-specific oviposition rate, and age-specific survival rate. The oviposition model was validated against independent field data and described the field occurrence pattern of the egg population of H. armigera very well. Our model should be a useful component for population modeling of H. armigera and can be used independently for the timing of sprays in management programs for this key pest species.
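The three components named above can be assembled as a sketch; the functional forms follow the abstract (extreme value, two-parameter Weibull, sigmoid), but every parameter value below is an illustrative guess rather than the paper's fitted estimate, except that the peak is pinned at 973 eggs/female at 25ᵒC:

```python
import math

def total_fecundity(T, f_max=973.0, T_opt=25.0, B=5.0):
    """Extreme-value (Gumbel-type) temperature response for lifetime eggs/female."""
    z = (T_opt - T) / B
    return f_max * math.exp(1.0 + z - math.exp(z))   # peaks at f_max when T = T_opt

def cum_oviposition(px, a=0.5, b=2.0):
    """Two-parameter Weibull: fraction of lifetime eggs laid by physiological age px."""
    return 1.0 - math.exp(-((px / a) ** b))

def survival(px, c=0.8, d=0.1):
    """Sigmoid age-specific survival: fraction of females alive at age px."""
    return 1.0 / (1.0 + math.exp((px - c) / d))

def eggs_between(T, px0, px1):
    """Rough eggs per female laid between physiological ages px0 and px1 at
    temperature T (survival approximated by its value at the start of the interval)."""
    return total_fecundity(T) * (cum_oviposition(px1) - cum_oviposition(px0)) * survival(px0)
```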

Keywords: cotton bollworm, life table, temperature-dependent adult development, temperature-dependent fecundity

Procedia PDF Downloads 136
13378 Combination of Geological, Geophysical and Reservoir Engineering Analyses in Field Development: A Case Study

Authors: Atif Zafar, Fan Haijun

Abstract:

A sequence of different reservoir engineering methods and tools for reservoir characterization and field development is presented in this paper. Real data from the Jin Gas Field of the L-Basin of Pakistan are used. The basic concept behind this work is to highlight the importance of well test analysis in a broader sense (i.e., for reservoir characterization and field development), rather than merely for determining permeability and skin parameters. Normally, for reservoir characterization we rely on well test analysis to some extent, but for field development planning, well test analysis has become a forgotten tool, specifically for locating new development wells. This paper describes the successful implementation of well test analysis in the Jin Gas Field, where the main uncertainties were identified during the initial stage of field development, when the location of a new development well had been marked only on the basis of G&G (geologic and geophysical) data. The seismic interpretation could not detect one of the boundaries (fault, sub-seismic fault, or heterogeneity) near the main and only producing well of the Jin Gas Field, whereas the results of the model from the well test analysis played a crucial role in proposing the location of the second well of the newly discovered field. The results from the different methods of well test analysis of the Jin Gas Field are also integrated with and supported by other reservoir engineering tools, i.e., the material balance method and the volumetric method. In this way, a comprehensive workflow and algorithm are obtained for integrating well test analyses with geological and geophysical analyses for reservoir characterization and field development. On this basis, it was shown that the proposed location of the new development well was not justified and that it should lie in a direction other than south.

Keywords: field development plan, reservoir characterization, reservoir engineering, well test analysis

Procedia PDF Downloads 349
13377 Removal of Chromium by UF5kDa Membrane: Its Characterization, Optimization of Parameters, and Evaluation of Coefficients

Authors: Bharti Verma, Chandrajit Balomajumder

Abstract:

Water pollution is escalating owing to industrialization and the indiscriminate discharge of toxic heavy metal ions from the semiconductor, electroplating, metallurgical, mining, chemical manufacturing, and tannery industries, among others. In the semiconductor industry, various kinds of chemicals are used in wafer preparation; fluoride, toxic solvents, heavy metals, dyes and salts, suspended solids, and chelating agents may be found in the wastewater effluent of semiconductor manufacturing. Likewise, in chrome plating within the electroplating industry, the effluent contains large amounts of chromium. Since Cr(VI) is highly toxic, exposure to it poses an acute health risk, and chronic exposure can even lead to mutagenesis and carcinogenesis. In contrast, naturally occurring Cr(III) is much less toxic than Cr(VI). The discharge limits of hexavalent and trivalent chromium are 0.05 mg/L and 5 mg/L, respectively. There are numerous methods for heavy metal removal, such as adsorption, chemical precipitation, membrane filtration, ion exchange, and electrochemical methods. The present study focuses on the removal of chromium ions using a flat-sheet UF5kDa membrane. The ultrafiltration process operates at a finer separation scale than microfiltration, so the separation achieved may be influenced by both sieving and the Donnan effect. Ultrafiltration is a promising method for the rejection of heavy metals such as chromium, fluoride, cadmium, nickel, and arsenic from effluent water. Its benefits are that the operation is quite simple, the removal efficiency is high compared to some other removal methods, and it is reliable. Polyamide membranes were selected for the present study on the rejection of Cr(VI) from feed solution.
The objective of the current work is to examine the rejection of Cr(VI) from aqueous feed solutions by flat-sheet UF5kDa membranes under different parameters such as pressure, feed concentration, and feed pH. The experiments revealed that the removal efficiency of Cr(VI) increases with increasing pressure. The effects of feed pH and the initial chromium dosage in the feed solution have also been studied. The membrane was characterized by FTIR, SEM, and AFM before and after the run. The mass transfer coefficients have been estimated, and the membrane transport parameters have been calculated and found to be in good agreement with the applied model.
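For context on the coefficients being estimated, here is a small sketch of the standard observed-rejection and film-theory (concentration polarization) relations; this is textbook membrane math, not the specific transport model applied in the study:

```python
import math

def observed_rejection(c_feed, c_permeate):
    """Observed rejection R_o = 1 - Cp/Cf."""
    return 1.0 - c_permeate / c_feed

def true_rejection(r_obs, flux, k):
    """Film-theory correction for concentration polarization:
    ln((1 - R_o)/R_o) = ln((1 - R_t)/R_t) + Jv/k, solved for R_t,
    where Jv is permeate flux and k the mass transfer coefficient."""
    lhs = math.log((1.0 - r_obs) / r_obs) - flux / k
    ratio = math.exp(lhs)          # = (1 - R_t) / R_t
    return 1.0 / (1.0 + ratio)
```

At zero flux the observed and true rejections coincide; at higher flux the true rejection exceeds the observed one, reflecting solute build-up at the membrane surface.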

Keywords: heavy metal removal, membrane process, waste water treatment, ultrafiltration

Procedia PDF Downloads 127
13376 Early Education Assessment Methods

Authors: Anantdeep Kaur, Sharanjeet Singh

Abstract:

Early childhood education and the assessment of children are essential tools that help them in their growth and development. Techniques should be developed and tools created in this field, as it covers a very important learning phase of life. Some information and sources are included for student assessment to provide a record of growth in all developmental areas: cognitive, physical, language, social-emotional, and approaches to learning. As an early childhood educator, it is very important to identify children who need special support and counseling, because they are not mentally mature enough to discuss their problems and needs with the teacher. It is the duty and responsibility of the educator to assess children from their body language, behavior, and routine actions, identifying the skills that can be improved and that can carry them forward in their future life. Children should also be assessed on their weaker points, because this is the right time to correct them, and they can be improved with certain methods and tools by working on them constantly. Children should be observed regularly across all facets of development, including intellectual, linguistic, social-emotional, and physical development. A physical education class should be held every day to monitor physical growth activities, which can help to assess physical activeness and motor abilities. When children are outside on the playgrounds, it is very important to instill environmental understanding in them, so that they know they are part of nature; this will help them feel at one with the universe rather than isolated as individuals. This approach helps them live a childhood full of energy. All types of assessment have unique purposes. It is important first to determine what should be measured, and then to find the program that best assesses those areas.

Keywords: special needs, motor ability, environmental understanding, physical development

Procedia PDF Downloads 84
13375 Exploring Time-Series Phosphoproteomic Datasets in the Context of Network Models

Authors: Sandeep Kaur, Jenny Vuong, Marcel Julliard, Sean O'Donoghue

Abstract:

Time-series data are useful for modelling as they enable model evaluation. However, when reconstructing models from phosphoproteomic data, non-exact methods are often utilised, because knowledge of the network structure, such as which kinases and phosphatases lead to the observed phosphorylation state, is incomplete. Such reactions are therefore often hypothesised, which gives rise to uncertainty. Here, we propose a framework, implemented via a web-based tool (as an extension to Minardo), which, given time-series phosphoproteomic datasets, can generate κ models. The incompleteness and uncertainty in the generated model and reactions are clearly presented to the user visually. Furthermore, we demonstrate, via a toy EGF signalling model, the use of algorithmic verification to verify κ models. Manually formulated requirements were evaluated against the model, leading to the highlighting of the nodes causing unsatisfiability (i.e., error-causing nodes). We aim to integrate such methods into our web-based tool and demonstrate how the identified erroneous nodes can be presented to the user visually. Thus, in this research we present a framework that enables a user to explore phosphoproteomic time-series data in the context of models. The observer can visualise which reactions in the model are highly uncertain and which nodes cause incorrect simulation outputs. A tool such as this enables an end-user to determine which empirical analyses to perform in order to reduce uncertainty in the presented model, thus enabling a better understanding of the underlying system.

Keywords: κ-models, model verification, time-series phosphoproteomic datasets, uncertainty and error visualisation

Procedia PDF Downloads 237
13374 Evaluation of DNA Microarray System in the Identification of Microorganisms Isolated from Blood

Authors: Merih Şimşek, Recep Keşli, Özgül Çetinkaya, Cengiz Demir, Adem Aslan

Abstract:

Bacteremia is a clinical entity with high morbidity and mortality rates when immediate diagnosis or treatment cannot be achieved. Microorganisms that can cause sepsis or bacteremia are easily isolated from blood cultures. Fifty-five positive blood cultures were included in this study. Microorganisms in the 55 blood cultures were isolated by conventional microbiological methods; the microorganisms were then identified phenotypically by the Vitek-2 system. The same blood culture samples were also identified genotypically by the Multiplex-PCR DNA Low-Density Microarray System. At the end of the identification process, the DNA microarray system’s success in identification was evaluated against the Vitek-2 system. The Vitek-2 and DNA microarray systems identified the same microorganisms in 53 samples; in the other 2 blood cultures, different microorganisms were identified by the DNA microarray system. The microorganisms identified by the Vitek-2 system were thus identical to those identified by the DNA microarray system in 96.4% of cases. In addition to the bacteria identified by Vitek-2, the presence of a second bacterium was detected in 5 blood cultures by the DNA microarray system. Both the Vitek-2 and DNA microarray systems identified 18 of the 55 positive blood cultures as E. coli strains. The corresponding identification counts (Vitek-2 and DNA microarray, respectively) were 6 and 8 for Acinetobacter baumannii, 10 and 10 for K. pneumoniae, 5 and 5 for S. aureus, 7 and 11 for Enterococcus spp., 5 and 5 for P. aeruginosa, and 2 and 2 for C. albicans. According to these results, the DNA microarray system requires both a technical device and experienced staff support, and its kits are more expensive than those of Vitek-2; it should therefore be used in conjunction with conventional microbiological methods.
In this way, large microbiology laboratories will produce faster, more sensitive, and more successful results in the identification of cultured microorganisms.
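The headline agreement figure can be reproduced in a few lines; the sketch below uses placeholder identification labels rather than the study's per-sample results:

```python
def percent_agreement(ids_a, ids_b):
    """Percentage of samples on which two identification systems agree."""
    assert len(ids_a) == len(ids_b)
    same = sum(1 for a, b in zip(ids_a, ids_b) if a == b)
    return 100.0 * same / len(ids_a)

# 53 matching identifications out of 55 cultures, as reported:
vitek = ["match"] * 53 + ["A", "B"]
array = ["match"] * 53 + ["C", "D"]
agreement = percent_agreement(vitek, array)  # rounds to 96.4 %
```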

Keywords: microarray, Vitek-2, blood culture, bacteremia

Procedia PDF Downloads 331
13373 Intertextuality in Choreography: Investigation of Text and Movements in Making Choreography

Authors: Muhammad Fairul Azreen Mohd Zahid

Abstract:

Speech, text, and movement intensify aspects of creating choreography by connecting with emotional entanglements, tradition, literature, and other texts. This research adopts a practice-as-research approach that prioritises the choreographic process as a mode of inquiry. Within this context, the study intervenes at critical conjunctions of choreographic theory, bringing together new reflections on the moving body, spaces of action, and the intertextuality between text and movement in making choreography. Throughout the process, the researcher introduces levels of deliberation from speech through movement and text to express emotion within the narrative context of an “illocutionary act.” This practice as research produces a different meaning from the “utterance text” to “utterance movements,” read through the perspective of J. L. Austin's speech act theory and based on fragmented text from the “pidato adat” that has been used as the opening speech in Randai. Reading the text through Jacques Derrida's theory of deconstruction also yields different meanings. The process of creating the choreography also helps to lay out the basic normative structure implicit in the “constative” (statement text/movement) and the “performative” (command text/movement). Through this process, the researcher also examines several methods of using text in two works, Joseph Gonzales' “Becoming King-The Pakyung Revisited” and Crystal Pite's “The Statement,” as references for producing different methods of making choreography. A semiotic foundation supports the reading of occurrences within dance discourses as texts through a semiotic lens. The method used in this research is qualitative, including interviews and a simulation of the concept to obtain an outcome.

Keywords: intertextuality, choreography, speech act, performative, deconstruction

Procedia PDF Downloads 81
13372 An Integrated Approach for Optimizing Drillable Parameters to Increase Drilling Performance: A Real Field Case Study

Authors: Hamidoddin Yousife

Abstract:

Drilling optimization requires a prediction of the drilling rate of penetration (ROP), since it provides a significant reduction in drilling costs. Several factors, both controllable and uncontrollable, can have an impact on the ROP. Numerous drilling penetration rate models based on drilling parameters have been considered. This paper considers the effect of proper drilling parameter selection, such as bit, mud type, applied weight on bit (WOB), rotary speed in revolutions per minute (RPM), and flow rate, on drilling optimization and drilling cost reduction. A predictive analysis of real-time drilling performance is used to determine the optimal drilling operation. As a result of these modeling studies, the real data collected from three directional wells at the Azadegan oil fields, Iran, were verified and adjusted to determine the drillability of a specific formation. Comparison of simulation results with actual drilling results shows significant improvement in accuracy. Once the simulations had been validated, optimum drilling parameters and equipment specifications were determined by varying weight on bit (WOB), rotary speed (RPM), hydraulics (hydraulic pressure), and bit specification for each well until the highest drilling rate was achieved. To evaluate the potential operational and economic benefits of the optimized results, qualitative and quantitative analyses of the data were performed.
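As an illustration of how varying WOB and RPM against a penetration-rate model works, here is a sketch using a generic power-law ROP model; the coefficient and exponents are purely illustrative, and the paper's actual ROP model is not specified in this abstract:

```python
def rop(wob, rpm, a=1.2, b=0.9, c=0.6):
    """Simplified power-law rate-of-penetration model: ROP = a * WOB^b * RPM^c.
    Coefficients a, b, c are formation-specific and here purely illustrative."""
    return a * (wob ** b) * (rpm ** c)

def best_setting(wob_options, rpm_options):
    """Grid search over candidate (WOB, RPM) settings for the highest predicted ROP."""
    return max(((w, r) for w in wob_options for r in rpm_options),
               key=lambda s: rop(*s))

# With monotone exponents the largest setting wins; real models include
# founder points beyond which more WOB degrades ROP.
setting = best_setting([5.0, 10.0, 15.0], [60.0, 120.0, 180.0])
```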

Keywords: drilling, cost, optimization, parameters

Procedia PDF Downloads 151
13371 Analysis and Modeling of Vibratory Signals Based on LMD for Rolling Bearing Fault Diagnosis

Authors: Toufik Bensana, Slimane Mekhilef, Kamel Tadjine

Abstract:

The use of vibration analysis has been established as the most common and reliable method of analysis in the field of condition monitoring and diagnostics of rotating machinery. Rolling bearings are found in a broad range of rotary machines and play a crucial role in the modern manufacturing industry. Unfortunately, the vibration signals collected from a faulty bearing are generally non-stationary and nonlinear, with strong noise interference, so it is essential to extract the fault features correctly. In this paper, a novel numerical analysis method based on local mean decomposition (LMD) is proposed. LMD decomposes the signal into a series of product functions (PFs), each of which is the product of an envelope signal and a purely frequency-modulated (FM) signal. The envelope of a PF is the instantaneous amplitude (IA), and the instantaneous frequency (IF) is the derivative of the unwrapped phase of the purely frequency-modulated (FM) signal. After that, the fault characteristic frequency of the roller bearing can be extracted by performing spectrum analysis on the instantaneous amplitude of the PF component containing the dominant fault information. The results show the effectiveness of the proposed technique in fault detection and diagnosis of rolling element bearings.
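The final step, spectrum analysis of the instantaneous amplitude, can be illustrated on a synthetic signal; the sketch below substitutes an FFT-based Hilbert envelope for a full LMD decomposition, and the carrier/fault frequencies are made up:

```python
import numpy as np

# Simulated bearing-like signal (illustrative): a 1 kHz carrier amplitude-
# modulated at a 37 Hz "fault" rate, mimicking periodic impact excitation.
fs, n = 8192, 8192
t = np.arange(n) / fs
f_fault, f_carrier = 37.0, 1000.0
x = (1.0 + 0.8 * np.cos(2 * np.pi * f_fault * t)) * np.sin(2 * np.pi * f_carrier * t)

def envelope(sig):
    """Instantaneous amplitude via the analytic signal
    (FFT-based Hilbert transform, even-length signals)."""
    spec = np.fft.fft(sig)
    h = np.zeros(len(sig))
    h[0] = 1.0
    h[1 : len(sig) // 2] = 2.0
    h[len(sig) // 2] = 1.0
    return np.abs(np.fft.ifft(spec * h))

def dominant_envelope_freq(sig, fs):
    """Peak of the envelope spectrum (DC excluded) = candidate fault frequency."""
    env = envelope(sig)
    env = env - env.mean()                     # remove DC before the spectrum
    mag = np.abs(np.fft.rfft(env))
    freqs = np.fft.rfftfreq(len(sig), 1.0 / fs)
    return freqs[1:][np.argmax(mag[1:])]

found = dominant_envelope_freq(x, fs)          # peaks at the 37 Hz fault rate
```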

Keywords: fault diagnosis, local mean decomposition, rolling element bearing, vibration analysis

Procedia PDF Downloads 396