Search results for: statistical model
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 19610

10130 Numerical Calculation and Analysis of Fine Echo Characteristics of Underwater Hemispherical Cylindrical Shell

Authors: Hongjian Jia

Abstract:

A finite-length cylindrical shell with a spherical cap is a typical engineering approximation model of actual underwater targets. The research on the omni-directional acoustic scattering characteristics of this target model can provide a favorable basis for the detection and identification of actual underwater targets. The elastic resonance characteristics of the target are the results of the comprehensive effect of the target length, shell-thickness ratio and materials. Under the conditions of different materials and geometric dimensions, the coincidence resonance characteristics of the target have obvious differences. Aiming at this problem, this paper obtains the omni-directional acoustic scattering field of the underwater hemispherical cylindrical shell by numerical calculation and studies the influence of target geometric parameters (length, shell-thickness ratio) and material parameters on the coincidence resonance characteristics of the target in turn. The study found that the formant interval is not a stable value and changes with the incident angle. Among them, the formant interval is less affected by the target length and shell-thickness ratio and is significantly affected by the material properties, which is an effective feature for classifying and identifying targets of different materials. The quadratic polynomial is utilized to fully fit the change relationship between the formant interval and the angle. The results show that the three fitting coefficients of the stainless steel and aluminum targets are significantly different, which can be used as an effective feature parameter to characterize the target materials.
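The quadratic fit of formant interval against incident angle can be sketched with `numpy.polyfit`; the angle and interval values below are synthetic placeholders, not the paper's measurements:

```python
import numpy as np

# Illustrative sketch only: angles and formant-interval values are hypothetical.
# It shows the kind of quadratic polynomial fit the abstract describes
# (formant interval as a function of incident angle).
angles_deg = np.array([0.0, 15.0, 30.0, 45.0, 60.0, 75.0, 90.0])

# Synthetic formant intervals generated from a known quadratic a*x^2 + b*x + c
true_coeffs = (0.002, -0.05, 3.1)
intervals = np.polyval(true_coeffs, angles_deg)

# Fit a quadratic: np.polyfit returns the coefficients [a, b, c]
a, b, c = np.polyfit(angles_deg, intervals, deg=2)
```

Because the synthetic data are noise-free, the fit recovers the generating coefficients; on measured echo data the three fitted coefficients would be the material-discriminating features the abstract describes.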

Keywords: hemispherical cylindrical shell;, fine echo characteristics;, geometric and material parameters;, formant interval

Procedia PDF Downloads 103
10129 Analysis study According Some of Physical and Mechanical Variables for Joint Wrist Injury

Authors: Nabeel Abdulkadhim Athab

Abstract:

The purpose of this research is to conduct a comparative, computer-assisted analysis of selected physical and mechanical variables in wrist joint injury. Through this analysis it is possible to distinguish the degree of variation in joint function after the sample underwent a rehabilitation program designed to improve the joint's effectiveness and naturally restore it. The researcher hypothesized that there would be statistically significant differences between the pre- and post-test results of the research sample as a result of the rehabilitation program, which developed the activity of the muscles acting on the wrist joint. The researcher used the descriptive method. The research sample included 6 players with wrist joint injuries, with a mean age of 21.68 years (standard deviation 1.13) and a mean height of 178 cm (standard deviation 2.08); the sample proved homogeneous. The data were collected and entered into statistical software, leading to the following main conclusions: 1. The sample's adherence to the rehabilitation program produced measurable changes in the studied variables, reflecting improved activity and effectiveness of the wrist joint in the injured players. 2. The programmed analysis measured the research variables with high accuracy, making it possible to discriminate differences in motor ability between intact and injured wrist joints. The main recommendations are: 1. To use computer systems in scientific research in order to obtain accurate results. 2. To program rehabilitation exercises according to an expert system so that patients can use them without referring to a specialist.
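The pre/post comparison the abstract describes amounts to a paired t test on six athletes; a minimal sketch with hypothetical scores (not the study's data):

```python
import math
import statistics

def paired_t(pre, post):
    """Paired t statistic for pre/post rehabilitation test scores."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    mean_d = statistics.mean(diffs)
    sd_d = statistics.stdev(diffs)  # sample SD of the paired differences
    return mean_d / (sd_d / math.sqrt(n)), n - 1  # t statistic, degrees of freedom

# Hypothetical wrist-function scores for 6 injured players (illustrative only)
pre  = [42.0, 38.5, 40.2, 44.1, 39.0, 41.3]
post = [48.2, 44.0, 45.9, 50.3, 43.8, 47.1]
t, df = paired_t(pre, post)
```

The computed t would then be compared against the critical value for n-1 = 5 degrees of freedom to test the significance of the pre/post difference.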

Keywords: analysis of joint wrist injury, physical and mechanical variables, wrist joint, wrist injury

Procedia PDF Downloads 430
10128 Variation In Gastrocnemius and Hamstring Muscle Activity During Peak Knee Flexor Torque After Anterior Cruciate Ligament Reconstruction with Hamstring Graft

Authors: Luna Sequier, Florian Forelli, Maude Traulle, Amaury Vandebrouck, Pascal Duffiet, Louis Ratte, Jean Mazeas

Abstract:

The study's objective is to compare the muscular activity of the knee flexor muscles in patients who underwent anterior cruciate ligament reconstruction with a hamstring autograft and in individuals who have not undergone surgery. Methods: The participants were divided into two groups: a healthy group and an experimental group who had undergone anterior cruciate ligament reconstruction with a hamstring graft. All participants performed a knee flexion strength test on an isokinetic dynamometer, during which the activity of the medial gastrocnemius, lateral gastrocnemius, biceps femoris, and medial hamstring muscles was measured. Each group's mean muscle activity was compared statistically, and a muscle activity ratio of the gastrocnemius and hamstring muscles was calculated. Results: The results showed a significant difference in the activity of the medial gastrocnemius (p = 0.004901), the biceps femoris (p = 5.394×10⁻⁶), and the semitendinosus (p = 1.822×10⁻⁶) muscles, with higher biceps femoris and semitendinosus activity in the experimental group. Inter-subject differences were, however, substantial. Conclusion: This study has shown a difference in gastrocnemius and hamstring muscle activity between patients who underwent anterior cruciate ligament reconstruction surgery and healthy participants. With further results, this could indicate a modification of muscle activity patterns after surgery, which could lead to compensatory behaviors at return to sport and eventually explain a higher injury risk for these patients.

Keywords: anterior cruciate ligament, electromyography, muscle activity, physiotherapy

Procedia PDF Downloads 236
10127 Impact of Applying Bag House Filter Technology in Cement Industry on Ambient Air Quality - Case Study: Alexandria Cement Company

Authors: Haggag H. Mohamed, Ghatass F. Zekry, Shalaby A. Elsayed

Abstract:

Most sources of air pollution in Egypt are of anthropogenic origin. Alexandria Governorate is located in the north of Egypt. The main sectors contributing to air pollution in Alexandria are industry, transportation, and area sources due to human activities; Alexandria hosts more than 40% of the industrial activities in Egypt, and cement manufacture contributes a significant share of the particulate pollution load. The surroundings of Alexandria Portland Cement Company (APCC) were selected as the study area. Continuous monitoring data of Total Suspended Particulate (TSP) from the APCC main kiln stack were collected to assess the dust emission control technology. An Electrostatic Precipitator (ESP) had been fitted on the cement kiln since 2002. The TSP data for the first quarter of 2012 were compared with those of the first quarter of 2013, after installation of the new bag house filter. In the present study, based on these monitoring data and meteorological data, a detailed air dispersion modeling investigation was carried out using the Industrial Source Complex Short Term model (ISC3-ST) to find out the impact of the new bag house filter control technology on neighborhood ambient air quality. The model results show a drastic reduction of the ambient TSP hourly average concentration from 44.94 μg/m³ to 5.78 μg/m³, which confirms the strongly positive impact on ambient air quality of applying bag house filter technology to the APCC cement kiln.
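ISC3-ST is built on the steady-state Gaussian plume equation. The sketch below is a heavily simplified illustration of that equation, with assumed power-law dispersion coefficients and hypothetical stack parameters; the regulatory model uses tabulated Pasquill-Gifford curves and many additional corrections:

```python
import math

def plume_concentration(q, u, x, y, z, h):
    """Ground-reflected Gaussian plume concentration (g/m^3).
    q: emission rate (g/s), u: wind speed (m/s), x: downwind distance (m),
    y: crosswind offset (m), z: receptor height (m), h: effective stack height (m).
    The dispersion coefficients use simple assumed power laws; ISC3-ST itself
    uses tabulated Pasquill-Gifford curves for each stability class."""
    sigma_y = 0.08 * x / math.sqrt(1 + 0.0001 * x)
    sigma_z = 0.06 * x / math.sqrt(1 + 0.0015 * x)
    lateral = math.exp(-y**2 / (2 * sigma_y**2))
    vertical = (math.exp(-(z - h)**2 / (2 * sigma_z**2))
                + math.exp(-(z + h)**2 / (2 * sigma_z**2)))  # ground reflection
    return q / (2 * math.pi * u * sigma_y * sigma_z) * lateral * vertical

# Hypothetical kiln stack: 10 g/s TSP, 4 m/s wind, 60 m effective height
c_1km = plume_concentration(10.0, 4.0, 1000.0, 0.0, 0.0, 60.0)
c_10km = plume_concentration(10.0, 4.0, 10000.0, 0.0, 0.0, 60.0)
```

Running such a model over a full year of hourly meteorological data, once with the pre-retrofit emission rate and once with the post-retrofit rate, gives the before/after ambient concentrations the abstract compares.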

Keywords: air pollution modeling, ambient air quality, baghouse filter, cement industry

Procedia PDF Downloads 265
10126 A Bayesian Population Model to Estimate Reference Points of Bombay-Duck (Harpadon nehereus) in Bay of Bengal, Bangladesh Using CMSY and BSM

Authors: Ahmad Rabby

Abstract:

The demographic trends of Bombay-duck were analyzed from time-series catch data using CMSY and BSM for the first time in Bangladesh. During 2000-2018, CMSY indicates the lowest average production in 2000 and the highest in 2018; this was used in the estimation of prior biomass by the default rules. The CMSY analysis found 31,030 possible viable trajectories for 3,422 r-k pairs, and the final estimate of the intrinsic rate of population increase (r) was 1.19 year⁻¹ (95% CL = 0.957-1.48 year⁻¹). The carrying capacity (k) of Bombay-duck was 283×10³ tons (95% CL = 173×10³-464×10³ tons) and MSY was 84.3×10³ tons year⁻¹ (95% CL = 49.1×10³-145×10³ tons year⁻¹). The Bayesian state-space implementation of the Schaefer production model (BSM), using catch and CPUE data, found a catchability coefficient (q) of 1.63×10⁻⁶ (lcl = 1.27×10⁻⁶ to ucl = 2.10×10⁻⁶), r = 1.06 year⁻¹ (95% CL = 0.727-1.55 year⁻¹), k = 226×10³ tons (95% CL = 170×10³-301×10³ tons) and MSY = 60×10³ tons year⁻¹ (95% CL = 49.9×10³-72.2×10³ tons year⁻¹). For Bombay-duck fishery management based on the BSM assessment, Fmsy = 0.531 with 95% CL = 0.364-0.775 (if B > 1/2 Bmsy, then Fmsy = 0.5r; r and Fmsy are linearly reduced if B < 1/2 Bmsy). Biomass in 2018 was 110×10³ tons (2.5th-97.5th percentile = 82.3-155×10³ tons). Relative biomass (B/Bmsy) in the last year was 0.972 (2.5th-97.5th percentile = 0.728-1.37), fishing mortality in the last year was 0.738 (2.5th-97.5th percentile = 0.525-1.37), and exploitation (F/Fmsy) was 1.39 (2.5th-97.5th percentile = 0.988-1.86). Since B/Bmsy was smaller than 1.0 while F/Fmsy was higher than 1.0, the fishery is over-exploited, indicating that more conservative management strategies are required for the Bombay-duck fishery.
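The reference points quoted above follow directly from the Schaefer surplus production model that BSM implements (MSY = rk/4, Bmsy = k/2, Fmsy = r/2; small differences from the quoted 0.531 reflect rounding). A minimal sketch using the abstract's point estimates:

```python
def schaefer_step(b, r, k, catch):
    """One annual step of the Schaefer surplus production model:
    B[t+1] = B[t] + r*B[t]*(1 - B[t]/k) - C[t]."""
    return b + r * b * (1.0 - b / k) - catch

# BSM point estimates reported in the abstract for Bombay-duck
r = 1.06            # intrinsic rate of increase (year^-1)
k = 226_000.0       # carrying capacity (tons)

msy = r * k / 4.0   # maximum sustainable yield, about 60,000 tons/year
bmsy = k / 2.0      # biomass at MSY
fmsy = r / 2.0      # fishing mortality at MSY

# Sketch: project biomass 5 years from the 2018 estimate at a constant catch
b = 110_000.0       # 2018 biomass estimate (tons)
for _ in range(5):
    b = schaefer_step(b, r, k, catch=60_000.0)
```

With the 2018 biomass slightly below Bmsy, holding catch at the MSY level makes the projected stock drift slowly downward, consistent with the abstract's over-exploitation finding.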

Keywords: biological reference points, catchability coefficient, carrying capacity, intrinsic rate of population increase

Procedia PDF Downloads 124
10125 The Targeting Logic of Terrorist Groups in the Sahel

Authors: Mathieu Bere

Abstract:

Al-Qaeda and Islamic State-affiliated groups such as Jama'at Nusrat al-Islam wal-Muslimin (JNIM) and the Islamic State-Greater Sahara faction, now part of the Boko Haram splinter group Islamic State in West Africa, were responsible, between 2018 and 2020, for at least 1,333 violent incidents against both military and civilian targets, including the assassination and kidnapping for ransom of Western citizens in Mali, Burkina Faso and Niger (the central Sahel). Protecting civilians from the terrorist violence now spreading from the Sahel to the coastal countries of West Africa has been very challenging, mainly because of the many unknowns that surround the perpetrators. To contribute to better protection of civilians in the region, this paper aims to shed light on the motivations and targeting logic of jihadist perpetrators of terrorist violence against civilians in the central Sahel region. To that end, it draws on relevant secondary data retrieved from datasets, the media, and the existing literature, as well as on primary data collected through interviews and surveys in Burkina Faso. An analysis of the data with the support of qualitative and statistical analysis software shows that military and rational strategic motives, more than purely ideological or religious motives, have been the main drivers of terrorist violence, which strategically targeted government symbols and representatives as well as local leaders in the central Sahel. Behind this targeting logic, the jihadist grand strategy emerges: wiping out the Western-inspired legal, educational and governance system in order to replace it with an Islamic, sharia-based political, legal, and educational system.

Keywords: terrorism, jihadism, Sahel, targeting logic

Procedia PDF Downloads 81
10124 Modeling Loads Applied to Main and Crank Bearings in the Compression-Ignition Two-Stroke Engine

Authors: Marcin Szlachetka, Mateusz Paszko, Grzegorz Baranski

Abstract:

This paper discusses AVL EXCITE Designer simulation research into the loads applied to the main and crank bearings of a compression-ignition two-stroke engine. A model of the engine lubrication system was created covering the parts of the system related to particular nodes of the bearing system, i.e., the connection of the main bearings in the engine block with the crankshaft, and the connection of the crank pins with the connecting rod. The analysis focused on the load, given as a distribution of hydrodynamic oil film pressure, corresponding to different values of radial internal clearance. The impact of gas force on minimal oil film thickness in the main and crank bearings was also studied as a function of crankshaft rotational speed. The model calculates the oil film parameters, the oil film pressure distribution, the oil temperature change and the dimensions of the bearings, as well as the oil temperature distribution on the surfaces of the bearing seats. Accordingly, it was possible to select, for example, a correct clearance for each of the bearing nodes. The research was performed for several values of engine crankshaft speed ranging from 800 RPM to 4000 RPM. Bearing oil pressure was varied with engine speed between 1 bar and 5 bar, at an oil temperature of 90°C. The main bearing clearances initially adopted for the calculations were 0.015 mm, 0.025 mm, 0.035 mm, 0.05 mm and 0.1 mm. The oil used in the research corresponded to the SAE 5W-40 classification. The paper presents selected research results referring to certain specific operating points and bearing radial internal clearances. Acknowledgement: This work has been realized in cooperation with The Construction Office of WSK 'PZL-KALISZ' S.A. and is part of Grant Agreement No. POIR.01.02.00-00-0002/15 financed by the Polish National Centre for Research and Development.

Keywords: crank bearings, diesel engine, oil film, two-stroke engine

Procedia PDF Downloads 207
10123 Response of Diaphragmatic Excursion to Inspiratory Muscle Trainer Post Thoracotomy

Authors: H. M. Haytham, E. A. Azza, E. S. Mohamed, E. G. Nesreen

Abstract:

Thoracotomy is a major surgery with serious pulmonary complications, so the purpose of this study was to determine the response of diaphragmatic excursion to an inspiratory muscle trainer after thoracotomy. Thirty patients of both sexes (16 men and 14 women), aged from 20 to 40 years, who had undergone thoracotomy participated in this study. The practical work was done in the cardiothoracic department of Kasr-El-Aini Hospital, Faculty of Medicine, starting 3 days postoperatively. Patients were assigned to two groups: group A (study group) included 15 patients (8 men and 7 women) who received inspiratory muscle training using an inspiratory muscle trainer for 20 minutes plus routine chest physiotherapy (deep breathing, coughing and early ambulation) twice daily, 3 days per week, for one month. Group B (control group) included 15 patients (8 men and 7 women) who received the routine chest physiotherapy only (deep breathing, coughing and early ambulation) twice daily, 3 days per week, for one month. Ultrasonography was used to evaluate the changes in diaphragmatic excursion before and after the training program. Statistical analysis revealed a significantly greater increase in diaphragmatic excursion in the study group (59.52%) than in the control group (18.66%) after use of the inspiratory muscle trainer postoperatively. It was concluded that the inspiratory muscle training device increases diaphragmatic excursion in patients after thoracotomy by improving inspiratory muscle strength and the mechanics of breathing, and that the inspiratory muscle trainer can be used as a physical therapy rehabilitation method to reduce postoperative pulmonary complications after thoracotomy.
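The reported group gains are simple percent changes in mean excursion; the raw excursion values below are hypothetical, chosen only to reproduce the study group's reported 59.52% figure:

```python
def percent_increase(pre, post):
    """Percentage improvement, as used to report diaphragmatic excursion gains."""
    return (post - pre) / pre * 100.0

# Hypothetical pre/post excursion means (cm); the abstract reports the
# resulting percentages (59.52% study vs. 18.66% control), not raw values.
study_gain = percent_increase(4.2, 6.7)
```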

Keywords: diaphragmatic excursion, inspiratory muscle trainer, ultrasonography, thoracotomy

Procedia PDF Downloads 316
10122 Powerful Media: Reflection of Professional Audience

Authors: Hamide Farshad, Mohammadreza Javidi Abdollah Zadeh Aval

Abstract:

As a result of the growing penetration of the media into human life, a new role, the "audience," has been defined in social life; a role that has changed dramatically since its formation. This article aims to define the position of the audience in the new media equations that have transformed the media's role. Using library and documentary methods to study the history, it examines the evolution of the audience concept and the relation between audience and media in the new media context. In the past, it was assumed that mass communication ended with the audience's reception of the message. With the emergence of interactive media and the transformation of the audience's social life, however, a new kind of mass communication has formed, and the imagined picture of a passive audience has been replaced by the audience's influence on the communication process. Part of this influence takes the form of feedback, one of the elements of mass communication. In mass communication, audience feedback is fully accepted; but in many cases, in response to audience feedback, the media changes its direction, a shift known as media feedback. In this state, media and audience are both actors, continuously exchanging positions in an interaction. With the growing number of audiences and media outlets, this process has taken on a new character: the role of actor is sometimes taken by an audience member influencing another audience member, or by a medium influencing another medium. This article presents this multiple communication process through a model titled "The bilateral influence of the audience and the media." Based on this model, audience power and media power are not two sides of the same coin, and by accepting both as actors, the bilateral power of audience and media become complementary.
The compatibility between media and audience is further analyzed under the hypothesis of a bilateral, interactional relation, and by analyzing the hypothesis of laws of action, the dos and don'ts of this role are defined, which the media must know and accept in order to survive. These laws also play a determining role in the strategic studies of a medium.

Keywords: audience, effect, media, interaction, action laws

Procedia PDF Downloads 483
10121 An Artificial Intelligence Framework to Forecast Air Quality

Authors: Richard Ren

Abstract:

Air pollution is a serious danger to international well-being and economies - it will kill an estimated 7 million people every year, costing world economies $2.6 trillion by 2060 due to sick days, healthcare costs, and reduced productivity. In the United States alone, 60,000 premature deaths are caused by poor air quality. For this reason, there is a crucial need to develop effective methods to forecast air quality, which can mitigate air pollution’s detrimental public health effects and associated costs by helping people plan ahead and avoid exposure. The goal of this study is to propose an artificial intelligence framework for predicting future air quality based on timing variables (i.e. season, weekday/weekend), future weather forecasts, as well as past pollutant and air quality measurements. The proposed framework utilizes multiple machine learning algorithms (logistic regression, random forest, neural network) with different specifications and averages the results of the three top-performing models to eliminate inaccuracies, weaknesses, and biases from any one individual model. Over time, the proposed framework uses new data to self-adjust model parameters and increase prediction accuracy. To demonstrate its applicability, a prototype of this framework was created to forecast air quality in Los Angeles, California using datasets from the RP4 weather data repository and EPA pollutant measurement data. The results showed good agreement between the framework’s predictions and real-life observations, with an overall 92% model accuracy. The combined model is able to predict more accurately than any of the individual models, and it is able to reliably forecast season-based variations in air quality levels. Top air quality predictor variables were identified through the measurement of mean decrease in accuracy.
This study proposed and demonstrated the efficacy of a comprehensive air quality prediction framework leveraging multiple machine learning algorithms to overcome individual algorithm shortcomings. Future enhancements should focus on expanding and testing a greater variety of modeling techniques within the proposed framework, testing the framework in different locations, and developing a platform to automatically publish future predictions in the form of a web or mobile application. Accurate predictions from this artificial intelligence framework can in turn be used to save and improve lives by allowing individuals to protect their health and allowing governments to implement effective pollution control measures.
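The three-model averaging step described above can be sketched as follows; the probability matrices are made-up placeholders for the trained models' outputs, not results from the study:

```python
import numpy as np

# Hypothetical class probabilities from the three top-performing models
# (logistic regression, random forest, neural network) over four days
# and three air-quality categories. Each row sums to 1.
p_logreg = np.array([[0.7, 0.2, 0.1],
                     [0.1, 0.6, 0.3],
                     [0.2, 0.5, 0.3],
                     [0.8, 0.1, 0.1]])
p_forest = np.array([[0.6, 0.3, 0.1],
                     [0.2, 0.5, 0.3],
                     [0.1, 0.6, 0.3],
                     [0.7, 0.2, 0.1]])
p_nnet   = np.array([[0.8, 0.1, 0.1],
                     [0.1, 0.7, 0.2],
                     [0.3, 0.4, 0.3],
                     [0.9, 0.05, 0.05]])

p_ensemble = (p_logreg + p_forest + p_nnet) / 3.0  # simple average of models
predicted = p_ensemble.argmax(axis=1)              # forecast AQI category per day
```

Averaging the calibrated probabilities, rather than the hard class labels, is one common way to realize the abstract's goal of damping any single model's weaknesses.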

Keywords: air quality prediction, air pollution, artificial intelligence, machine learning algorithms

Procedia PDF Downloads 120
10120 The Chemical Transport Mechanism of Emitter Micro-Particles in Tungsten Electrode: A Metallurgical Study

Authors: G. Singh, H. Schuster, U. Füssel

Abstract:

The stability of the electric arc and the durability of the electrode tip used in Tungsten Inert Gas (TIG) welding demand a metallurgical study of the chemical transport mechanism of emitter oxide particles in the tungsten electrode under real welding conditions. Tungsten electrodes doped with rare-earth emitter oxides such as La₂O₃, ThO₂, Y₂O₃, CeO₂ and ZrO₂ feature a comparatively lower work function than pure tungsten and thus have superior emission characteristics, owing to the lower surface temperature of the cathode. The local change in concentration of these emitter particles in the tungsten electrode due to high-temperature diffusion (chemical transport) can change its functional properties, such as electrode temperature, work function, electron emission, and the stability of the electrode tip shape. The resulting increase in tip surface temperature leads to electrode material loss. It was also observed that the tungsten recrystallizes into large grains at high temperature; when the grain boundaries are granular in shape, the intergranular diffusion of oxide emitter particles takes more time to reach the electrode surface. In the experimental work, the microstructure of the used electrode's tip surface will be studied by scanning electron microscopy and reflective X-ray techniques in order to gauge the extent of the diffusion and chemical reaction of the emitter particles. In addition, a simulation model is proposed to explain the effect of oxide particle diffusion on the electrode's microstructure, electron emission characteristics, and tip erosion. This model suggests metallurgical modifications to the tungsten electrode to enhance its erosion resistance.
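The high-temperature diffusion the abstract invokes is commonly modeled with an Arrhenius law, D = D₀·exp(-Q/RT); the parameters below are illustrative assumptions, not values from the study:

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def diffusion_coefficient(d0, q, temp_k):
    """Arrhenius temperature dependence D = D0 * exp(-Q / (R*T)),
    a standard starting point for modelling emitter-oxide transport
    in a hot tungsten cathode."""
    return d0 * math.exp(-q / (R * temp_k))

# Hypothetical parameters (not measured values from this study)
D0 = 1.0e-6   # pre-exponential factor (m^2/s)
Q = 250e3     # activation energy (J/mol)

d_cool = diffusion_coefficient(D0, Q, 1500.0)  # cooler electrode shank
d_hot = diffusion_coefficient(D0, Q, 3000.0)   # near the arc-heated tip
```

The strong exponential temperature dependence is why emitter depletion concentrates at the hot tip, where diffusion is orders of magnitude faster than along the cooler shank.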

Keywords: rare-earth emitter particles, temperature-dependent diffusion, TIG welding, Tungsten electrode

Procedia PDF Downloads 183
10119 The Development of E-Commerce in Mexico: An Econometric Analysis

Authors: Alma Lucero Ortiz, Mario Gomez

Abstract:

Technological advances contribute to the well-being of humanity by allowing people to perform tasks more efficiently. Technology offers tangible advantages to countries that adopt information technologies, communication, and the Internet across all social and productive sectors. The Internet is a networking infrastructure that allows people throughout the world to communicate, exceeding the limits of time and space. Nowadays, the Internet has changed the way of doing business, leading to a digital economy in which e-commerce has emerged as commercial transactions conducted over the Internet. For this inquiry, e-commerce is seen as a source of economic growth for the country. This research therefore aims to answer the question of which main variables have affected the development of e-commerce in Mexico. The study covers the period from 1990 to 2017 and aims to gain insight into how the independent variables influence e-commerce development. The independent variables are information infrastructure construction, urbanization level, economic level, technology level, human capital level, educational level, standard of living, and a price index. The results suggest that the independent variables have an impact on the development of e-commerce in Mexico. The present study is carried out in five parts. After the introduction, the second part presents a literature review of the main qualitative and quantitative studies measuring the variables under study. Next, an empirical study is applied using time-series data, which are processed with an econometric model. In the fourth part, the analysis and discussion of results are presented, and finally, some conclusions are drawn.
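The econometric model itself is not specified in the abstract; a common starting point for such a time-series regression is ordinary least squares, sketched here on synthetic data with hypothetical regressor names:

```python
import numpy as np

# Synthetic OLS sketch of the kind of regression described: an e-commerce
# development indicator regressed on candidate drivers. The data are generated
# from known coefficients, not the actual 1990-2017 Mexican series.
rng = np.random.default_rng(0)
n = 28                                  # one observation per year, 1990-2017
internet_users = rng.uniform(0, 1, n)   # stand-in regressors, rescaled to 0-1
urbanization = rng.uniform(0, 1, n)

X = np.column_stack([np.ones(n), internet_users, urbanization])
true_beta = np.array([0.5, 2.0, 1.0])   # intercept and two slopes
y = X @ true_beta                       # noise-free, so OLS recovers beta exactly

beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
```

On real macro time series, unit-root and cointegration checks would normally precede OLS to avoid spurious regression, which is presumably part of the paper's econometric treatment.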

Keywords: digital economy, e-commerce, econometric model, economic growth, internet

Procedia PDF Downloads 232
10118 Predictions of Thermo-Hydrodynamic State for Single and Three Pads Gas Foil Bearings Operating at Steady-State Based on Multi-Physics Coupling Computer Aided Engineering Simulations

Authors: Tai Yuan Yu, Pei-Jen Wang

Abstract:

Oil-free turbomachinery is considered one of the critical technologies for future green power generation systems. Oil-free technology allows clean, compact, and maintenance-free operation, and gas foil bearings, abbreviated as GFBs, are key to the technology. Since the first applications in auxiliary power units and air cycle machines in the 1970s, clear improvements have been made to the computational models of dynamic rotor behavior. However, many technical issues are still poorly understood or remain unsolved, among them thermal management and the pattern of pressure distribution in the bearing clearance. This paper presents a three-dimensional (3D) fluid-structure interaction model of single-pad and three-pad foil bearings to predict bearing working behavior, so that the characteristics of the two can be compared. The coupled analysis model applies the dynamic working characteristics to both the gas film and the mechanical structures; therefore, the elastic deformation of the foil structure and the hydrodynamic pressure of the gas film can both be calculated by a finite element method program. As a result, the temperature distribution can also be solved iteratively by the coupled analysis. In conclusion, the working fluid state in the gas film of the two pad configurations at constant rotational speed can be solved and compared with experimental results.

Keywords: fluid-structure interaction, multi-physics simulations, gas foil bearing, oil-free, transient thermo-hydrodynamic

Procedia PDF Downloads 160
10117 Student Feedback of a Major Curricular Reform Based on Course Integration and Continuous Assessment in Electrical Engineering

Authors: Heikki Valmu, Eero Kupila, Raisa Vartia

Abstract:

A major curricular reform was implemented at Metropolia UAS in 2014. The teaching was to be based on larger course entities and collaborative pedagogy. The most thorough reform was conducted in the department of electrical engineering and automation technology. It has already been shown that the reform has been extremely successful with respect to student progression and drop-out rate; the improvement in results has been much more significant in this department than in the other engineering departments, which made only minor pedagogical changes. At the beginning of the spring term of 2017, a thorough student feedback project was conducted in the department. The survey consisted of thirty questions about the implementation of the curriculum, the student workload and other matters related to student satisfaction, and the reply rate was more than 40%. The students were divided into four categories: first-year students [cat. 1] and the students of the three different majors [categories 2-4]. These categories are valid since all students follow the same course structure in the first two semesters, after which they may freely select their major, and all staff members are divided into four teams accordingly. The curriculum consists of consecutive 15-credit (ECTS) courses, each taught by a group of 3-5 teachers. There are to be no end exams, and continuous assessment is to be employed. In 2014, the different teacher groups were encouraged to innovatively employ different assessment methods within the given specifications. One of these methods has since been used in categories 1 and 2: the students have to complete a number of compulsory tasks each week to pass the course, and the actual grade is defined by a smaller number of tests throughout the course. The tasks vary from homework assignments, reports and laboratory exercises to larger projects, and the actual smaller tests are usually organized during the regular lecture hours.
The teachers of the other two majors have been pedagogically more conservative, and student progression has been better in categories 1 and 2 than in categories 3 and 4. One of the main goals of this survey was to analyze the reasons for this difference and the assessment methods in detail, besides general student satisfaction. The results show that in the categories that follow the specified assessment model more strictly, much more versatile assessment methods are used and the basic spirit of the new pedagogy is followed. Student satisfaction is also significantly better in categories 1 and 2. It may be clearly stated that continuous assessment and teacher cooperation improve learning outcomes, student progression and student satisfaction, while too much academic freedom seems to lead to worse results [cat. 3 and 4]. A standardized assessment model will be launched for all students in autumn 2017. This model differs from the one used so far in categories 1 and 2 by allowing more flexibility to teacher groups, but it will force all teacher groups to follow the general rules in order to further improve the results and student satisfaction.

Keywords: continuous assessment, course integration, curricular reform, student feedback

Procedia PDF Downloads 201
10116 Dissolution Kinetics of Chevreul’s Salt in Ammonium Chloride Solutions

Authors: Mustafa Sertçelik, Turan Çalban, Hacali Necefoğlu, Sabri Çolak

Abstract:

In this study, the solubility of Chevreul’s salt and its dissolution kinetics in ammonium chloride solutions were investigated. The Chevreul’s salt used in the studies was obtained under the optimum conditions (ammonium sulphide concentration 0.4 M, copper sulphate concentration 0.25 M, temperature 60°C, stirring speed 600 rev/min, pH 4, and reaction time 15 min) determined by T. Çalban et al. The selected parameters affecting solubility were reaction temperature, concentration of ammonium chloride, stirring speed, and solid/liquid ratio. Correlation of the experimental results was achieved using linear regression implemented in the statistical package Statistica. The effect of the parameters on the solubility of Chevreul’s salt was examined, and the integrated rate expression of the dissolution rate was found using kinetic models for solid-liquid heterogeneous reactions. The results revealed that the dissolution rate of Chevreul’s salt increased with increasing temperature, ammonium chloride concentration and stirring speed, whereas it decreased with increasing solid/liquid ratio. Based on the application of the experimental results to the kinetic models, we can deduce that the dissolution rate of Chevreul’s salt is controlled by diffusion through the ash (product) layer. The activation energy of the dissolution reaction was found to be 74.83 kJ/mol. The integrated rate expression, together with the effects of the parameters on the solubility of Chevreul’s salt, was found to be: 1 − 3(1 − X)^(2/3) + 2(1 − X) = [2.96 × 10^13 · (C_A)^3.08 · (S/L)^(−0.38) · (W)^1.23 · e^(−9001.2/T)] · t
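The left-hand side above is the standard ash-layer (product-layer) diffusion model g(X) = 1 − 3(1−X)^(2/3) + 2(1−X), which rises monotonically from 0 to 1, so the conversion X at a given time can be recovered by bisection. The sketch below simply evaluates the reported correlation; the operating point plugged in is illustrative, not taken from the study. As a consistency check, the exponent −9001.2/T corresponds to an activation energy of 9001.2 · R ≈ 74.8 kJ/mol, matching the reported value:

```python
import math

def g(X):
    """Ash-layer diffusion model: g(X) = 1 - 3(1-X)**(2/3) + 2(1-X)."""
    return 1 - 3 * (1 - X) ** (2 / 3) + 2 * (1 - X)

def rate_constant(c_a, s_l, w, t_kelvin):
    """Right-hand-side coefficient of the fitted expression (units as in
    the original study).  exp(-9001.2/T) implies Ea = 9001.2*R, i.e.
    about 74.8 kJ/mol, consistent with the reported activation energy."""
    return (2.96e13 * c_a ** 3.08 * s_l ** -0.38 * w ** 1.23
            * math.exp(-9001.2 / t_kelvin))

def conversion(t, k, tol=1e-12):
    """Invert g(X) = k*t for X by bisection; g rises monotonically 0 -> 1."""
    target = min(k * t, 1.0)
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

k = rate_constant(c_a=0.25, s_l=0.1, w=600, t_kelvin=333.15)  # illustrative
```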

Keywords: Chevreul's salt, copper, ammonium chloride, ammonium sulphide, dissolution kinetics

Procedia PDF Downloads 303
10115 High Performance Computing Enhancement of Agent-Based Economic Models

Authors: Amit Gill, Lalith Wijerathne, Sebastian Poledna

Abstract:

This research presents the details of the implementation of high performance computing (HPC) extension of agent-based economic models (ABEMs) to simulate hundreds of millions of heterogeneous agents. ABEMs offer an alternative approach to study the economy as a dynamic system of interacting heterogeneous agents, and are gaining popularity as an alternative to standard economic models. Over the last decade, ABEMs have been increasingly applied to study various problems related to monetary policy, bank regulations, etc. When it comes to predicting the effects of local economic disruptions, like major disasters, changes in policies, exogenous shocks, etc., on the economy of the country or the region, it is pertinent to study how the disruptions cascade through every single economic entity affecting its decisions and interactions, and eventually affect the economic macro parameters. However, such simulations with hundreds of millions of agents are hindered by the lack of HPC enhanced ABEMs. In order to address this, a scalable Distributed Memory Parallel (DMP) implementation of ABEMs has been developed using message passing interface (MPI). A balanced distribution of computational load among MPI-processes (i.e. CPU cores) of computer clusters while taking all the interactions among agents into account is a major challenge for scalable DMP implementations. Economic agents interact on several random graphs, some of which are centralized (e.g. credit networks, etc.) whereas others are dense with random links (e.g. consumption markets, etc.). The agents are partitioned into mutually-exclusive subsets based on a representative employer-employee interaction graph, while the remaining graphs are made available at a minimum communication cost. To minimize the number of communications among MPI processes, real-life solutions like the introduction of recruitment agencies, sales outlets, local banks, and local branches of government in each MPI-process, are adopted. 
Efficient communication among MPI processes is achieved by combining MPI derived datatypes with the new features of the latest MPI functions. Most of the communications are overlapped with computations, thereby significantly reducing the communication overhead. The current implementation is capable of simulating a small open economy. As an example, a single time step of a 1:1 scale model of Austria (i.e., about 9 million inhabitants and 600,000 businesses) can be simulated in 15 seconds. The implementation is being further enhanced to simulate a 1:1 model of the Euro-zone (i.e., 322 million agents).
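The partitioning idea, keeping employer-employee interactions process-local while accepting some cross-process traffic on the denser graphs, can be illustrated with a toy sketch. The firm and worker names and the round-robin rank assignment below are invented for illustration; the actual implementation distributes agents over MPI ranks:

```python
def cross_rank_edges(edges, rank_of):
    """Count interactions that cross process boundaries, i.e. the edges
    that would require inter-process messages."""
    return sum(1 for a, b in edges if rank_of[a] != rank_of[b])

# Toy economy: firms assigned to 2 ranks round-robin; each worker is
# placed on the rank of its employer, so the employer-employee graph
# generates no inter-process traffic at all.
firms = ["F0", "F1", "F2", "F3"]
rank_of = {f: i % 2 for i, f in enumerate(firms)}
employer = {"w0": "F0", "w1": "F0", "w2": "F1", "w3": "F2", "w4": "F3"}
rank_of.update({w: rank_of[f] for w, f in employer.items()})

employment_edges = list(employer.items())                       # process-local
consumption_edges = [("w0", "F1"), ("w2", "F0"), ("w4", "F0")]  # dense, random

local_cost = cross_rank_edges(employment_edges, rank_of)
remote_cost = cross_rank_edges(consumption_edges, rank_of)
```

By construction the employment edges cost zero messages, while the random consumption edges are the ones that motivate tricks like local sales outlets and branch banks to cut communication.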

Keywords: agent-based economic model, high performance computing, MPI-communication, MPI-process

Procedia PDF Downloads 121
10114 Iris Recognition Based on the Low Order Norms of Gradient Components

Authors: Iman A. Saad, Loay E. George

Abstract:

The iris pattern is an important biological feature of the human body; it has become a very active topic in both research and practical applications. In this paper, an algorithm is proposed for iris recognition, and a simple, efficient and fast method is introduced to extract a set of discriminatory features using a first-order gradient operator applied to grayscale images. The gradient-based features are robust, to a certain extent, against the variations that may occur in the contrast or brightness of iris image samples; such variations mostly occur due to lighting differences and camera changes. First, the iris region is located, after which it is remapped to a rectangular area of size 360x60 pixels. A new method is also proposed for detecting eyelash and eyelid points; it relies on statistical analysis of the image to mark eyelash and eyelid points as noise. To account for feature localization variations, the rectangular iris image is partitioned into N overlapped sub-images (blocks); from each block a set of different average directional gradient density values is calculated and used as a texture feature vector. The gradient operators are applied along the horizontal, vertical and diagonal directions, and the low-order norms of the gradient components are used to establish the feature vector. A Euclidean distance based classifier is used as the matching metric for determining the degree of similarity between the feature vector extracted from the tested iris image and the template feature vectors stored in the database. Experimental tests were performed using 2639 iris images from the CASIA-IrisV4 Interval database; the attained recognition accuracy reached 99.92%.
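The per-block feature extraction and Euclidean matching described above can be sketched as follows. The tiny blocks and the choice of the L1 norm as the "low-order" norm are illustrative, not the paper's exact configuration:

```python
def block_gradient_features(block, p=1):
    """Average directional gradient densities of one sub-image: low-order
    L_p norms of the horizontal, vertical and diagonal first-order
    gradient components (block is a 2-D list of grey levels)."""
    h, w = len(block), len(block[0])
    sums = [0.0, 0.0, 0.0]          # horizontal, vertical, diagonal
    n = (h - 1) * (w - 1)
    for y in range(h - 1):
        for x in range(w - 1):
            gx = block[y][x + 1] - block[y][x]        # horizontal gradient
            gy = block[y + 1][x] - block[y][x]        # vertical gradient
            gd = block[y + 1][x + 1] - block[y][x]    # diagonal gradient
            for i, gval in enumerate((gx, gy, gd)):
                sums[i] += abs(gval) ** p
    return [(s / n) ** (1.0 / p) for s in sums]

def euclidean(u, v):
    """Matching metric between a probe feature vector and a template."""
    return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5

flat = [[10] * 6 for _ in range(4)]                   # constant block
ramp = [[x * 5 for x in range(6)] for _ in range(4)]  # horizontal ramp
f_flat = block_gradient_features(flat)
f_ramp = block_gradient_features(ramp)
```

A constant block yields a zero feature vector, a horizontal ramp excites only the horizontal and diagonal components, and identical blocks match at zero distance.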

Keywords: iris recognition, contrast stretching, gradient features, texture features, Euclidean metric

Procedia PDF Downloads 331
10113 Stress-Strain Relation for Human Trabecular Bone Based on Nanoindentation Measurements

Authors: Marek Pawlikowski, Krzysztof Jankowski, Konstanty Skalski, Anna Makuch

Abstract:

Nanoindentation, or the depth-sensing indentation (DSI) technique, has proven very useful for measuring the mechanical properties of various tissues at a micro-scale. Bone tissue, both trabecular and cortical, is one of the tissues most commonly tested by means of DSI. Most often, such tests on bone samples are carried out to compare the mechanical properties of lamellar and interlamellar bone, osteonal bone, as well as compact and cancellous bone. In this paper, a relation between stress and strain for human trabecular bone is presented. The relation, and the formulation of a constitutive model for human trabecular bone, is based on the results of nanoindentation tests, for which the approach proposed by Oliver and Pharr is adapted. The tests were carried out on samples of trabecular tissue extracted from human femoral heads harvested during artificial hip joint implantation surgeries. Before sample preparation, the heads were kept in 95% alcohol at 4 °C, and the cubic samples cut out of the heads were stored under the same conditions. The dimensions of the specimens were 25 mm x 25 mm x 20 mm. A total of 20 samples were tested; donor ages ranged from 56 to 83 years. The tests were conducted with a spherical indenter tip of diameter 0.200 mm, a maximum load P = 500 mN, and a loading rate of 500 mN/min. The data obtained from DSI tests alone only describe bone behaviour in terms of nanoindentation force vs. nanoindentation depth. However, it is more interesting and useful to know the characteristics of trabecular bone in the stress-strain domain, which allows one to simulate trabecular bone behaviour in a more realistic way. The stress-strain curves obtained in the study show a relation between age and the mechanical behaviour of trabecular bone. It was also observed that the bone matrix of trabecular tissue exhibits an ability to absorb energy.
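A minimal sketch of the Oliver-Pharr data reduction for a spherical tip is given below: contact depth h_c = h_max − ε·P_max/S (ε ≈ 0.75 for a sphere), spherical contact area A_c = π(2R·h_c − h_c²), reduced modulus E_r = (√π/2)·S/√A_c, and hardness H = P_max/A_c. The unloading stiffness S and maximum depth h_max used here are invented numbers, and the paper's actual reduction may differ in detail:

```python
import math

def oliver_pharr_sphere(p_max, s, h_max, radius, eps=0.75):
    """Oliver-Pharr reduction for a spherical tip (eps = 0.75 for a
    paraboloid/sphere).  Units: mN and micrometres, so stresses come
    out in GPa (1 mN/um^2 = 1 GPa)."""
    h_c = h_max - eps * p_max / s                  # contact depth
    a_c = math.pi * (2 * radius * h_c - h_c ** 2)  # spherical contact area
    e_r = (math.sqrt(math.pi) / 2) * s / math.sqrt(a_c)  # reduced modulus
    hardness = p_max / a_c
    return h_c, e_r, hardness

# P_max = 500 mN and tip radius = 100 um (0.200 mm diameter) follow the
# abstract; the stiffness S and depth h_max are hypothetical values.
h_c, e_r, hardness = oliver_pharr_sphere(p_max=500.0, s=300.0,
                                         h_max=5.0, radius=100.0)
```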

Keywords: constitutive model, mechanical behaviour, nanoindentation, trabecular bone

Procedia PDF Downloads 217
10112 Digitalization and High Audit Fees: An Empirical Study Applied to US Firms

Authors: Arpine Maghakyan

Abstract:

The purpose of this paper is to study the relationship between the level of industry digitalization and audit fees, in particular the relationship between Big 4 auditor fees and the industry digitalization level. On the one hand, automation of business processes decreases internal control weaknesses and manual mistakes and increases work effectiveness and integration. On the other hand, it may cause serious misstatements, high business risks or even bankruptcy, typically in the early stages of automation. Incomplete automation can bring high audit risk, especially if the auditor does not fully understand the client’s business automation model. Higher audit risk will consequently cause higher audit fees, and this effect is more pronounced in Big 4 auditors’ behavior. Using data on US firms from 2005-2015, we found that industry-level digitalization interacts with auditor quality in determining audit fees. Moreover, the choice of a Big 4 or non-Big 4 auditor is correlated with the client’s industry digitalization level. Among Big 4 clients, a firm with a higher digitalization level pays more than one with a low digitalization level. In addition, a highly digitalized firm with a Big 4 auditor pays a higher audit fee than a non-Big 4 client. We use audit fees and firm-specific variables from the Audit Analytics and Compustat databases, analyzed using fixed effects regression methods with Wald tests as a sensitivity check. Fixed effects regression models at the firm level are used to determine the connections between technology use in business and audit fees, controlling for firm size, complexity, inherent risk, profitability and auditor quality. We chose the fixed effects model because it makes it possible to control for variables that have not been or cannot be measured.
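The fixed-effects logic, removing time-invariant firm characteristics by demeaning each firm's observations, can be sketched in a few lines. The panel values below are invented; the study itself uses Audit Analytics and Compustat data with several controls:

```python
def within_transform(panel):
    """Subtract each firm's own time-mean from its series: the 'within'
    transformation that absorbs firm fixed effects."""
    return {f: [v - sum(s) / len(s) for v in s] for f, s in panel.items()}

def fe_slope(x_panel, y_panel):
    """Pooled OLS slope on demeaned data (one-regressor fixed effects)."""
    xd, yd = within_transform(x_panel), within_transform(y_panel)
    num = sum(x * y for f in xd for x, y in zip(xd[f], yd[f]))
    den = sum(x * x for f in xd for x in xd[f])
    return num / den

def pooled_slope(x_panel, y_panel):
    """Naive pooled OLS slope, ignoring firm effects, for comparison."""
    xs = [v for s in x_panel.values() for v in s]
    ys = [v for s in y_panel.values() for v in s]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    num = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    den = sum((a - mx) ** 2 for a in xs)
    return num / den

# Two firms share the same true fee sensitivity (slope 2) but have very
# different unobserved levels; pooled OLS is distorted, fixed effects not.
x = {"A": [1, 2, 3], "B": [4, 5, 6]}
y = {"A": [12, 14, 16], "B": [108, 110, 112]}  # y = firm_effect + 2*x
slope = fe_slope(x, y)
naive = pooled_slope(x, y)
```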

Keywords: audit fees, auditor quality, digitalization, Big4

Procedia PDF Downloads 295
10111 Allergenic Potential of Airborne Algae Isolated from Malaysia

Authors: Chu Wan-Loy, Kok Yih-Yih, Choong Siew-Ling

Abstract:

The human health risks of poor air quality caused by a wide array of microorganisms have attracted much interest. Airborne algae have been reported as early as the 19th century, and they can be found in the air of tropical and warm atmospheres. Airborne algae normally originate from water surfaces, soil, trees, buildings and rock surfaces. It is estimated that at least 2880 algal cells are inhaled per person per day. However, relatively little data have been published on airborne algae and related adverse health effects, apart from sporadic reports of algae-associated clinical allergenicity. A collection of airborne algae cultures was established following a recent survey of the occurrence of airborne algae in indoor and outdoor environments in Kuala Lumpur. The aim of this study was to investigate the allergenic potential of the isolated airborne green and blue-green algae, namely Scenedesmus sp., Cylindrospermum sp. and Hapalosiphon sp. Suspensions of freeze-dried airborne algae were administered to a BALB/c mouse model through the intra-nasal route to determine their allergenic potential. Results showed that Scenedesmus sp. (1 mg/mL) increased systemic IgE levels in mice by 3-8 fold compared to pre-treatment, whereas Cylindrospermum sp. and Hapalosiphon sp. at a similar concentration caused IgE to increase by 2-4 fold. The potential of airborne algae to cause IgE-mediated type 1 hypersensitivity was elucidated using other immunological markers, namely the cytokine interleukins (IL)-4, 5 and 6 and interferon-γ. When the amounts of interleukins in mouse serum were compared between day 0 and day 53 (day of sacrifice), Hapalosiphon sp. (1 mg/mL) increased the expression of IL-4 and IL-6 by 8 fold, while Cylindrospermum sp. (1 mg/mL) increased the expression of IL-4 and IFN-γ by 8 and 2 fold, respectively. In conclusion, repeated exposure to the three selected airborne algae may stimulate the immune response and generate IgE in a mouse model.

Keywords: airborne algae, respiratory, allergenic, immune response, Malaysia

Procedia PDF Downloads 235
10110 Hemato-Biochemical Studies on Naturally Infected Camels with Trypanosomiasis

Authors: Khalid Mehmood, Riaz Hussain, Rao Z. Abbas, Tariq Abbas, Abdul Ghaffar, Ahmad J. Sabir

Abstract:

Blood-borne diseases such as trypanosomiasis have negative impacts on the health, production and working efficiency of camels in different camel-rearing areas of the world, including Pakistan. In the present study, blood samples were collected from camels kept under the desert conditions of Cholistan to estimate the prevalence of trypanosomiasis and the hemato-biochemical changes in naturally infected cases. Results showed an overall 9.31% prevalence of trypanosomiasis in camels. Various clinical signs, such as pyrexia, occasional shivering, inappetence, urticaria, swelling, lethargy, loss of condition and edema of the pads, were observed in a few cases. The statistical analysis did not show a significant association of age or sex with trypanosomiasis. However, the results revealed significantly decreased values of total erythrocyte count, packed cell volume, hemoglobin concentration, mean corpuscular hemoglobin concentration, serum total proteins and albumin, while increased values of mean corpuscular volume were recorded in infected animals compared to healthy ones. Significantly (P<0.01) increased values of total leukocyte count, monocytes, lymphocytes, neutrophils, and eosinophils were recorded in infected animals. Moreover, microscopic examination of blood films obtained from naturally infected cases showed the presence of the parasite and various morphological changes in cells, such as stomatocytes, hyperchromasia, and polychromasia. Significantly increased values of different hepatic enzymes, including alanine aminotransferase (ALT), aspartate aminotransferase (AST) and alkaline phosphatase (ALP), were also recorded.

Keywords: camel, hematological indices, serum enzymes, Trypanosomiasis

Procedia PDF Downloads 522
10109 Towards Dynamic Estimation of Residential Building Energy Consumption in Germany: Leveraging Machine Learning and Public Data from England and Wales

Authors: Philipp Sommer, Amgad Agoub

Abstract:

The construction sector significantly impacts global CO₂ emissions, particularly through the energy usage of residential buildings. To address this, various governments, including Germany's, are focusing on reducing emissions via sustainable refurbishment initiatives. This study examines the application of machine learning (ML) to estimate energy demands dynamically in residential buildings and enhance the potential for large-scale sustainable refurbishment. A major challenge in Germany is the lack of extensive publicly labeled datasets for energy performance, as energy performance certificates, which provide critical data on building-specific energy requirements and consumption, are not available for all buildings or require on-site inspections. Conversely, England and other countries in the European Union (EU) have rich public datasets, providing a viable alternative for analysis. This research adapts insights from these English datasets to the German context by developing a comprehensive data schema and calibration dataset capable of predicting building energy demand effectively. The study proposes a minimal feature set, determined through feature importance analysis, to optimize the ML model. Findings indicate that ML significantly improves the scalability and accuracy of energy demand forecasts, supporting more effective emissions reduction strategies in the construction industry. Integrating energy performance certificates into municipal heat planning in Germany highlights the transformative impact of data-driven approaches on environmental sustainability. The goal is to identify and utilize key features from open data sources that significantly influence energy demand, creating an efficient forecasting model. Using Extreme Gradient Boosting (XGB) and data from energy performance certificates, effective features such as building type, year of construction, living space, insulation level, and building materials were incorporated. 
These were supplemented by data derived from descriptions of roofs, walls, windows, and floors, integrated into three datasets. The emphasis was on features accessible via remote sensing, which, along with other correlated characteristics, greatly improved the model's accuracy. The model was further validated using SHapley Additive exPlanations (SHAP) values and aggregated feature importance, which quantified the effects of individual features on the predictions. The refined model using remote sensing data showed a coefficient of determination (R²) of 0.64 and a mean absolute error (MAE) of 4.12, indicating that predictions on the 1-100 efficiency-class scale (G-A) may deviate by 4.12 points on average. This R² increased to 0.84 with the inclusion of more samples, with wall type emerging as the most predictive feature. After optimizing and incorporating related features like estimated primary energy consumption, the R² score for the training and test set reached 0.94, demonstrating good generalization. The study concludes that ML models significantly improve prediction accuracy over traditional methods, illustrating the potential of ML in enhancing energy efficiency analysis and planning. This supports better decision-making for energy optimization and highlights the benefits of developing and refining data schemas using open data to bolster sustainability in the building sector. The study underscores the importance of supporting open data initiatives to collect similar features and support the creation of comparable models in Germany, enhancing the outlook for environmental sustainability.
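The reported metrics follow the standard definitions of MAE and the coefficient of determination, and can be reproduced from any set of predictions. The toy efficiency-class values below are invented for illustration, not the study's data:

```python
def mae(y_true, y_pred):
    """Mean absolute error."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def r2(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot

# Invented efficiency-class scores on the 1-100 (G-A) scale
y_true = [50, 60, 70, 80]
y_pred = [52, 58, 75, 78]
```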

Keywords: machine learning, remote sensing, residential building, energy performance certificates, data-driven, heat planning

Procedia PDF Downloads 53
10108 Covid Medical Imaging Trial: Utilising Artificial Intelligence to Identify Changes on Chest X-Ray of COVID

Authors: Leonard Tiong, Sonit Singh, Kevin Ho Shon, Sarah Lewis

Abstract:

Investigation into the use of artificial intelligence in radiology continues to develop at a rapid rate. During the coronavirus pandemic, the combination of an exponential increase in chest x-rays and unpredictable staff shortages resulted in a huge strain on the department's workload. The World Health Organisation estimates that two-thirds of the global population does not have access to diagnostic radiology. Therefore, there could be demand for a program that could detect acute changes in imaging compatible with infection to assist with screening. We generated a convolutional neural network and tested its efficacy in recognizing changes compatible with coronavirus infection. Following ethics approval, a deidentified set of 77 normal and 77 abnormal chest x-rays in patients with confirmed coronavirus infection was used to generate an algorithm that could train, validate and then test itself. DICOM and PNG image formats were selected due to their lossless compression. The model was trained with 100 images (50 positive, 50 negative), validated against 28 samples (14 positive, 14 negative), and tested against 26 samples (13 positive, 13 negative). The initial training involved teaching a convolutional neural network what constituted a normal study and which changes on the x-rays are compatible with coronavirus infection. The weightings were then modified, and the model was executed again. The training samples were processed in batch sizes of 8 and underwent 25 epochs of training. The results trended towards an 85.71% true positive/true negative detection rate and an area under the curve trending towards 0.95, indicating approximately 95% accuracy in detecting changes on chest x-rays compatible with coronavirus infection. Study limitations include access to only a small dataset and no specificity in the diagnosis.
Following a discussion with our programmer, there are areas where modifications in the weighting of the algorithm can be made in order to improve the detection rates. Given the high detection rate of the program and the potential ease of implementation, it would be effective in assisting staff who are not trained in radiology to detect otherwise subtle changes that might not be appreciated on imaging. Limitations include the lack of a differential diagnosis and of the appropriate clinical history, although this may be less of a problem in day-to-day clinical practice. It is nonetheless our belief that implementing this program and widening its scope to detect multiple pathologies such as lung masses will greatly assist both the radiology department and our colleagues by increasing workflow and detection rate.
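The area under the ROC curve quoted above is equivalent to the probability that a randomly chosen positive case outscores a randomly chosen negative case (the Mann-Whitney statistic), which can be computed directly from classifier scores. The scores below are invented for illustration, not the trial's outputs:

```python
def auc(pos_scores, neg_scores):
    """ROC area as the Mann-Whitney statistic: the probability that a
    random positive case outscores a random negative one (ties = 1/2)."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Invented classifier outputs for illustration
pos = [0.9, 0.8, 0.7, 0.4]   # scores on truly positive x-rays
neg = [0.6, 0.3, 0.2, 0.1]   # scores on truly negative x-rays
```

Comparing a score set against itself gives 0.5, the chance level, which is a handy sanity check for the implementation.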

Keywords: artificial intelligence, COVID, neural network, machine learning

Procedia PDF Downloads 85
10107 Role of Pro-Inflammatory and Regulatory Cytokines in Pathogenesis of Graves’ Disease in Association with Autoantibody Thyroid and Regulatory FoxP3 T-Cells

Authors: Dwitya Elvira, Eryati Darwin

Abstract:

Background: Graves’ disease (GD) is an autoimmune thyroid disease. An imbalance of Th1/Th2 cells and of T-regulatory (Treg)/Th17 cells is thought to play a pivotal role in the pathogenesis of GD. Treg FoxP3 cells produce TGF-β to maintain regulatory function, and Th17 cells produce IL-17, a cytokine thought to mediate several autoimmune diseases. The aim of this study is to assess the role of IL-17 and TGF-β in the pathogenesis of GD and to investigate their correlation with Thyroid Stimulating Hormone Receptor Antibody (TRAb) and Treg FoxP3 expression. Method: 30 GD patients and 27 age- and sex-matched controls were enrolled in this study. Diagnosis of GD was based on clinical and biochemical criteria. Serum IL-17, TGF-β, TRAb, and FoxP3 were measured by enzyme-linked immunosorbent assay (ELISA). Data were analyzed using SPSS 21.0 (SPSS Inc.). The Spearman rank correlation test was used to assess correlations, and statistical significance was accepted at P<0.05. Result: There was no significant correlation between IL-17 or TGF-β serum levels and FoxP3 expression in GD, but there was a significant correlation between TGF-β and TRAb serum levels (P<0.05). Serum levels of IL-17 and TGF-β were elevated in the patient group compared to controls: mean values were 14.43±2.15 pg/mL for IL-17 and 10.44±3.19 pg/mL for TGF-β in the patient group, versus 7.1±1.45 pg/mL and 4.95±1.35 pg/mL, respectively, in the control group. Conclusion: Serum IL-17 and TGF-β were elevated in GD patients, reflecting the activation of inflammatory and regulatory cytokines in the pathogenesis of GD. The significant correlation between TGF-β and TRAb suggests that Treg cytokines may play a role in the pathogenesis of GD.
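The Spearman rank correlation used for the TGF-β/TRAb association replaces raw values by (tie-aware) ranks and then computes a Pearson correlation on the ranks. A plain sketch, with illustrative sample values rather than the study's measurements:

```python
def ranks(values):
    """Tie-aware average ranks (1-based)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1                       # extend the tie group
        avg_rank = (i + j) / 2 + 1       # average rank of the tied group
        for k in range(i, j + 1):
            r[order[k]] = avg_rank
        i = j + 1
    return r

def spearman(x, y):
    """Pearson correlation of the two rank vectors."""
    rx, ry = ranks(x), ranks(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    dx = sum((a - mx) ** 2 for a in rx) ** 0.5
    dy = sum((b - my) ** 2 for b in ry) ** 0.5
    return num / (dx * dy)
```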

Keywords: IL-17, TGF-β, FoxP3, TRAb, Graves’ disease

Procedia PDF Downloads 280
10106 Development of a CFD Model for PCM Based Energy Storage in a Vertical Triplex Tube Heat Exchanger

Authors: Pratibha Biswal, Suyash Morchhale, Anshuman Singh Yadav, Shubham Sanjay Chobe

Abstract:

Energy demands are increasing, whereas energy sources, especially non-renewable ones, are limited. Due to the intermittent nature of renewable energy sources, it has become the need of the hour to find new ways to store energy. Among the various energy storage methods, latent heat thermal storage devices are becoming popular due to their high energy density per unit mass and volume at a nearly constant temperature. This work presents a computational fluid dynamics (CFD) model, built in ANSYS FLUENT 19.0, of the energy storage characteristics of a phase change material (PCM) filled in a vertical triplex tube thermal energy storage system. As its name suggests, a vertical triplex tube heat exchanger consists of three concentric tubes (pipe sections) that partition the device into three fluid domains. The PCM is filled in the middle domain, with heat transfer fluids flowing in the outermost and innermost domains. To enhance heat transfer inside the PCM, eight fins have been incorporated between the internal and external tubes. These fins run radially outwards from the outer wall of the innermost tube to the inner wall of the middle tube, dividing the middle domain (between the innermost and middle tubes) into eight sections, which are then filled with the PCM. The model is validated against earlier work, and a grid independence test is also presented. Further studies on the freezing and melting processes were carried out. The results are presented as pictorial representations of isotherms and liquid fraction.
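Melting with latent heat is commonly handled with the enthalpy method: temperature is recovered from each cell's enthalpy, and the liquid fraction falls out of the same relation. The normalized 1-D explicit sketch below is a didactic simplification of the 3-D ANSYS FLUENT model described above; all properties and the initial condition (solid slab at its melting point, heated from the left wall) are hypothetical:

```python
def melt_1d(n=20, steps=200, dt=1e-3):
    """Explicit 1-D enthalpy-method sketch of PCM melting.  Latent heat,
    heat capacity and conductivity are all normalized to 1."""
    lat, c, k = 1.0, 1.0, 1.0
    dx = 1.0 / n                  # dt*k/dx**2 = 0.4 <= 0.5: stable explicit
    t_melt, t_wall = 0.0, 1.0
    enth = [0.0] * n              # enthalpy; 0 => solid at the melting point

    def temp(h):                  # temperature as a function of enthalpy
        if h < 0.0:
            return h / c          # sub-cooled solid
        if h > lat:
            return (h - lat) / c  # superheated liquid
        return t_melt             # mushy zone: isothermal phase change

    for _ in range(steps):
        t = [temp(h) for h in enth]
        new = enth[:]
        for i in range(n):
            left = t_wall if i == 0 else t[i - 1]     # Dirichlet hot wall
            right = t[i] if i == n - 1 else t[i + 1]  # insulated far wall
            new[i] += dt * k * (left - 2.0 * t[i] + right) / dx ** 2
        enth = new
    return [min(max(h / lat, 0.0), 1.0) for h in enth]  # liquid fraction

lf = melt_1d()
```

The liquid fraction profile decreases monotonically from the heated wall, with a sharp melt front separating fully liquid from still-solid cells, which is the qualitative picture the isotherm and liquid-fraction plots in the study convey.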

Keywords: heat exchanger, thermal energy storage, phase change material, CFD, latent heat

Procedia PDF Downloads 150
10105 Turkish Validation of the Nursing Outcomes for Urinary Incontinence and Their Sensitivities on Nursing Interventions

Authors: Dercan Gencbas, Hatice Bebis, Sue Moorhead

Abstract:

In the nursing process, many nursing classification systems were created for international use, among them NANDA-I, the Nursing Outcomes Classification (NOC) and the Nursing Interventions Classification (NIC). In this direction, the main objective of this study is to establish a model for caregivers in hospitals and communities in Turkey and to ensure that nursing outputs are assessed by NOC-based measures. There are many scales to measure Urinary Incontinence (UI), which is very common in children, in old age and after vaginal delivery; NOC scales are ideal for use in the nursing process for a comprehensive and holistic assessment. For this reason, the purpose of this study is to evaluate the validity of the NOC outcomes and indicators used for the UI NANDA-I diagnoses. This research is a methodological study. In addition to the validity of the scale indicators, experts assessed how much each indicator would contribute to recovery after the nursing intervention. Content validity was assessed and calculated according to Fehring's (1987) model, by which expert inclusion criteria and scores were determined. For example, experts with at least four years of clinical experience scored 4 points; those with at least one year of experience with a nursing classification system scored 1 point; those with a publication on nursing classification scored 1 point; those with a doctoral degree in nursing scored 2 points; and those with a master's degree scored 1 point. According to this expert scoring, a total of 55 experts rated at Fehring's "senior degree" level with a score of 90. The experts were asked to what extent each indicator would contribute to recovery after the nursing interventions to be applied. For content validity tailored to Fehring's model, the specialists were asked to score each NOC outcome and indicator between 1 and 5, from 1 (not important) to 5 (very important).
After the expert opinions were collected, the weighted scores obtained for each NOC outcome and indicator were classified as critical (≥ 0.8), supplemental (0.5-0.8) or excluded (< 0.5). In the NANDA-I/NOC/NIC system (guideline), five NOC outcomes are proposed for the UI nursing diagnoses: Urinary Continence, Urinary Elimination, Tissue Integrity, Self-Care: Toileting and Medication Response. After the scales were translated into Turkish, the weighted averages of the expert scores for the content coverage of all five NOC outcomes and for the contribution of nursing interventions exceeded 0.8. Based on the expert opinions, 79 of the 82 indicators were rated as critical and 3 as supplemental; since no indicator scored below 0.5, none was excluded. All NOC outcomes were identified as valid and usable scales in Turkey. In this study, five NOC outcomes were verified for evaluating the outcomes of individuals receiving nursing care for UI and its variant types. Nurses in Turkey can benefit from the NOC outcome scales to provide care for elderly individuals with incontinence.
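Fehring's weighted-ratio scoring with the 0.8/0.5 cut-offs described above can be expressed compactly. The 1-5 to weight mapping follows Fehring's usual scheme, and the example ratings are invented:

```python
# Fehring's usual mapping of 1-5 expert ratings to weights
WEIGHTS = {5: 1.0, 4: 0.75, 3: 0.5, 2: 0.25, 1: 0.0}

def weighted_ratio(ratings):
    """Weighted ratio of one indicator's expert ratings."""
    return sum(WEIGHTS[r] for r in ratings) / len(ratings)

def classify(score):
    """Cut-offs used in the study: >= 0.8 critical, 0.5-0.8 supplemental,
    below 0.5 excluded."""
    if score >= 0.8:
        return "critical"
    if score >= 0.5:
        return "supplemental"
    return "excluded"

# Invented expert ratings for three hypothetical indicators
critical_item = [5, 5, 4, 5]
supplemental_item = [3, 4, 3, 2]
excluded_item = [1, 2, 1, 2]
```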

Keywords: nursing outcomes, content validity, nursing diagnosis, urinary incontinence

Procedia PDF Downloads 121
10104 Determining Causal Factors Affecting the Responsiveness and Productivity of Non-Governmental Universities

Authors: Davoud Maleki

Abstract:

Today, education is a long-term investment in human capital without which an economy stagnates. Higher education is a form of investment in human resources: by providing and improving knowledge, skills, and attitudes, it supports economic development. Universities carry the responsibility of training efficient human resources, increasing people's efficiency and productivity, expanding the boundaries of knowledge and technology, and promoting technology at highly specialized levels. The university therefore plays an infrastructural role in economic development and growth, because education creates skills and expertise in people and improves their abilities. In recent decades, Iran's higher education system has faced many problems; this study therefore seeks to identify and validate the causal factors affecting the responsiveness and productivity of non-governmental universities. The qualitative data come from semi-structured interviews with 25 senior and middle managers working in the units of the Islamic Azad University of Tehran province, selected by theoretical sampling. The data were analyzed with the stepwise method and the analytical techniques of Strauss and Corbin (1992). After determining the central category (responsiveness for the sake of the beneficiaries) and using it to organize the categories, expressions, and ideas that express the relationships between the main categories, six main categories were identified as causal factors affecting the university's responsiveness and productivity: (1) scientism, (2) human resources, (3) creating motivation in the university, (4) development based on needs assessment, (5) the teaching and learning process, and (6) university quality evaluation.
In order to validate the responsiveness model obtained from the qualitative stage, a questionnaire was prepared, and responses were collected from 146 master's and doctoral students of the Islamic Azad University in Tehran province. The quantitative data were analyzed with descriptive statistics and first- and second-order factor analysis using SPSS and AMOS 23. The findings indicated a relationship between the central category and the causal factors affecting responsiveness, and the model test in the quantitative stage confirmed the overall conceptual model.
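The first-order factor-extraction step of such a quantitative stage can be illustrated with a small principal-axis sketch: loadings are recovered from the largest eigenpair of the item correlation matrix. The synthetic responses below merely stand in for the 146 questionnaires; this is not the authors' SPSS/AMOS analysis.

```python
import numpy as np

# Three questionnaire items driven by one latent factor; first-factor
# loadings are estimated from the dominant eigenpair of the correlation
# matrix. Synthetic data for 146 "respondents".
rng = np.random.default_rng(0)
latent = rng.normal(size=(146, 1))                     # one underlying factor
noise = 0.4 * rng.normal(size=(146, 3))
items = latent @ np.array([[0.8, 0.7, 0.6]]) + noise   # observed item scores

corr = np.corrcoef(items, rowvar=False)                # 3x3 correlation matrix
eigvals, eigvecs = np.linalg.eigh(corr)                # ascending eigenvalues
loadings = eigvecs[:, -1] * np.sqrt(eigvals[-1])       # first-factor loadings
```

A dominant eigenvalue well above 1 (the Kaiser criterion) and loadings above roughly 0.5 are the usual signals that the items load on a common factor.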

Keywords: accountability, productivity, non-governmental universities, grounded theory

Procedia PDF Downloads 52
10103 Modeling Search-And-Rescue Operations by Autonomous Mobile Robots at Sea

Authors: B. Kriheli, E. Levner, T. C. E. Cheng, C. T. Ng

Abstract:

During the last decades, research interest in the planning, scheduling, and control of emergency response operations, especially the rescue and evacuation of people from the danger zone of marine accidents, has increased dramatically. Until the survivors (called 'targets') are found and saved, losses or damage may accrue whose extent depends on the location of the targets and the search duration. The problem is to search for and detect/rescue the targets as efficiently and as soon as possible with the help of intelligent mobile robots, so as to maximize the number of people saved and/or minimize the search cost under restrictions on the number of people saved within the allowable response time. We consider a special situation in which the autonomous mobile robots (AMRs), e.g., unmanned aerial vehicles and remote-controlled robo-ships, have no operator on board, being guided and completely controlled by on-board sensors and computer programs. We construct a mathematical model for the search process in an uncertain environment and provide a new fast algorithm for scheduling the activities of the autonomous robots during search-and-rescue missions after an accident at sea. We presume that in unknown environments, the AMR's search-and-rescue activity is subject to two types of error: (i) a 'false-negative' detection error, in which a target object is not discovered ('overlooked') by the AMR's sensors even though the AMR is in its close neighborhood, and (ii) a 'false-positive' detection error, also known as a 'false alarm', in which a clean place or area is wrongly classified by the AMR's sensors as a correct target. As the general resource-constrained discrete search problem is NP-hard, we restrict our study to finding local-optimal strategies.
A specificity of the considered operations research problem, in comparison with the traditional Kadane-De Groot-Stone search models, is that in our model the probability of a successful search outcome depends not only on the cost/time/probability parameters assigned to each individual location but also on parameters characterizing the entire history of (unsuccessful) search before any next location is selected. We provide a fast approximation algorithm for finding the AMR route that adopts a greedy search strategy: at each step, the on-board computer computes a current search-effectiveness value for each location in the zone and then searches the location with the highest value. Extensive experiments with random and real-life data provide strong evidence in favor of the suggested operations research model and the corresponding algorithm.
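The greedy strategy can be sketched as follows. A Bayesian update after each unsuccessful look is one standard way to make the next effectiveness value depend on the whole search history; the function and its parameters are illustrative, not the authors' exact formulation.

```python
# Greedy search sketch: each step searches the location with the highest
# current effectiveness (detection probability per unit cost), then updates
# the target-location probabilities by Bayes' rule after the unsuccessful
# look, so later choices depend on the search history.
def greedy_search_plan(prior, detect, cost, steps):
    p = list(prior)        # p[i]: probability the target is at location i
    route = []
    for _ in range(steps):
        eff = [p[i] * detect[i] / cost[i] for i in range(len(p))]
        k = max(range(len(p)), key=lambda i: eff[i])
        route.append(k)
        # posterior after not finding the target at location k
        # (detect[k] = 1 - false-negative rate of the sensors at k)
        miss = 1 - p[k] * detect[k]
        p = [(p[i] * (1 - detect[i]) if i == k else p[i]) / miss
             for i in range(len(p))]
    return route, p
```

After an unsuccessful search of a location, probability mass shifts toward the unsearched locations, which is what steers the greedy route away from already-covered areas.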

Keywords: disaster management, intelligent robots, scheduling algorithm, search-and-rescue at sea

Procedia PDF Downloads 166
10102 Biofuel Production via Thermal Cracking of Castor Methyl Ester

Authors: Roghaieh Parvizsedghy, Seyed Mojtaba Sadrameli

Abstract:

Diminishing oil reserves and deteriorating health standards caused by greenhouse gas emissions and the associated environmental impacts have spurred biofuel production. Vegetable oils have proved to be a valuable feedstock for this growing industry, as they are renewable and potentially inexhaustible. Thermal cracking of vegetable oils (triglycerides) yields biofuels that are similar to fossil fuels in composition, but their combustion and physical properties are limited. During the cracking of triglycerides, acrolein (a highly poisonous gas) and water are produced because of the glycerol in the molecular structure. Transesterification of vegetable oil is a method to remove glycerol from the triglyceride structure and produce methyl ester. In this study, castor methyl ester was thermally cracked in order to assess the efficiency of this method for producing bio-gasoline and bio-diesel. Several experiments were designed by means of the central composite method. Statistical analysis showed that two reaction parameters, namely cracking temperature and feed flowrate, affect product yields significantly. At the conditions optimized for maximum bio-gasoline production (480 °C and 29 g/h), 88.6% bio-oil was achieved, which was distilled and separated into bio-gasoline (28%) and bio-diesel (48.2%). The bio-gasoline exhibited a high octane number and heat of combustion; its distillation curve and Reid vapor pressure fell within the criteria for standard gasoline (class AA) of ASTM D4814, and the bio-diesel was compatible with standard diesel per ASTM D975. Water production was negligible, and no evidence of acrolein production was detected. Therefore, thermal cracking of castor methyl ester can be used as a method to produce valuable biofuels.
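The response-surface step behind a central composite design can be sketched as a quadratic least-squares fit of yield against temperature and flowrate. The nine (T, F) points below mimic a central composite layout, and the yields come from a synthetic surface peaked near the reported optimum of 480 °C and 29 g/h; none of these numbers are measurements from the paper.

```python
import numpy as np

# Fit a full quadratic response surface, yield = f(T, F), by ordinary
# least squares over a central-composite-style set of design points.
def fit_quadratic(T, F, y):
    X = np.column_stack([np.ones_like(T), T, F, T**2, F**2, T * F])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

# corner, center, and axial points (synthetic layout)
T = np.array([440, 440, 520, 520, 480, 480, 480, 423, 537], dtype=float)
F = np.array([20, 38, 20, 38, 29, 16, 42, 29, 29], dtype=float)
y = 88.6 - 0.005 * (T - 480) ** 2 - 0.05 * (F - 29) ** 2   # synthetic yields

beta = fit_quadratic(T, F, y)
```

Setting the fitted model's partial derivatives to zero then gives the stationary point that a design-of-experiments package would report as the optimum.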

Keywords: bio-diesel, bio-gasoline, castor methyl ester, thermal cracking, transesterification

Procedia PDF Downloads 236
10101 Influence of Infinite Elements in Vibration Analysis of High-Speed Railway Track

Authors: Janaki Rama Raju Patchamatla, Emani Pavan Kumar

Abstract:

The idea of increasing existing train speeds and introducing high-speed trains in India as part of Vision 2020 is challenging in terms of both economic viability and technical feasibility. Beyond economic viability, technical feasibility has to be thoroughly checked for safe operation and execution. Trains moving at high speeds need a well-established, firm, and safe track thoroughly tested against vibration effects. With increased train speeds, the track structure and the layered soil-structure interaction have to be critically assessed for vibrations and displacements. Physically establishing a track for testing and experimentation is a costly and time-consuming process. Software-based modelling and simulation give a relatively reliable, cost-effective means of testing the effects of critical parameters such as sleeper design and density, properties of the track and sub-grade, etc. The present paper reports the applicability of infinite elements in reducing unrealistic stress-wave reflections from the artificially truncated soil-structure interface. The influence of the infinite elements is quantified in terms of the displacement time histories of the adjoining soil and the deformation pattern in general. In addition, the railhead response histories at various locations show that the numerical model is realistic, without any aberrations at the boundaries. The numerical model is quite promising in its ability to simulate the critical parameters of track design.
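The role of an absorbing boundary can be illustrated with a one-dimensional finite-difference wave sketch: a right-travelling pulse either reflects off a rigid (fixed) end back into the domain or leaves through a first-order absorbing boundary, mimicking what infinite elements do at a truncated soil boundary. This is conceptual only, not the paper's finite-element track/soil model.

```python
import numpy as np

# Leapfrog scheme for the 1-D wave equation at Courant number 1 (dx = dt = c = 1).
# absorbing=True applies a first-order absorbing boundary (exact at Courant 1);
# absorbing=False leaves the ends fixed at zero, i.e. rigid reflecting walls.
def propagate(n=200, steps=260, absorbing=True):
    x = np.arange(n, dtype=float)
    u = np.exp(-0.05 * (x - 50.0) ** 2)        # right-moving pulse at t = 0
    u_prev = np.exp(-0.05 * (x - 49.0) ** 2)   # same pulse one step earlier
    for _ in range(steps):
        u_next = np.zeros(n)
        u_next[1:-1] = (2 * u[1:-1] - u_prev[1:-1]
                        + (u[2:] - 2 * u[1:-1] + u[:-2]))
        if absorbing:
            u_next[-1] = u[-2]   # outgoing wave exits the domain
            u_next[0] = u[1]
        u_prev, u = u, u_next
    return u
```

After enough steps, the absorbing run leaves an essentially empty domain, while the rigid run still contains the reflected pulse, which is exactly the spurious energy that infinite elements are meant to remove.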

Keywords: high-speed railway track, finite element method, infinite elements, vibration analysis, soil-structure interface

Procedia PDF Downloads 266