Search results for: component prediction
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4632

1422 The Factors Affecting the Promotion of Productivity from Nurses' View

Authors: Mahnaz Sanjari, Sedigheh Salemi, Mohammad Mirzabeigi

Abstract:

Nowadays, the world is facing a workforce crisis, and one of the most striking examples is the shortage of nurses. Nursing workforce productivity is related to various factors such as absenteeism, professional effectiveness, and quality of care. This cross-sectional study was conducted among 700 nurses working in 35 government hospitals across 9 provinces of Iran. The study was approved by the Nursing Council and was carried out with the authorization of the Research Ethics Committee. The questionnaire included 33 questions in 4 subcategories, including human resources, education, and management. Reliability was evaluated by Cronbach's alpha (α=0.85). Statistical analyses were performed using SPSS version 16. The results showed that nurses placed the most emphasis on "respect for the nurse-to-bed ratio", while the least important item was "using less experienced nurses". Other important factors in clinical productivity were "proper physical structure and amenities", "good communication with colleagues", and "having good facilities". "Human resources at all levels of the standard", "promotion on merit", and "well-defined relationships in the health system" were further important factors in productivity from the nurses' view. The main managerial factor was "justice between employees", and the main educational component of productivity was "updating nursing knowledge". More than half of the participants emphasized the management and educational factors. Productivity, as one of the main components of health care quality, leads to appropriate use of human and organizational resources, reduced service costs, and organizational development.

Keywords: productivity, nursing services, workforce, cost services

Procedia PDF Downloads 341
1421 An Integrated Approach for Optimal Selection of Machining Parameters in Laser Micro-Machining Process

Authors: A. Gopala Krishna, M. Lakshmi Chaitanya, V. Kalyana Manohar

Abstract:

In the present analysis, laser micro-machining (LMM) of silicon carbide particle (SiCp) reinforced Aluminum 7075 metal matrix composite (Al7075/SiCp MMC) was studied. During machining, the intense heat generated forms a layer on the workpiece surface, called the recast layer, which is detrimental to the surface quality of the component. The recast layer needs to be as small as possible for precision applications. Therefore, the height of the recast layer and the depth of the groove, which are conflicting in nature, were considered as the significant manufacturing criteria determining the quality of the machining process in LMM of the Al7075/10%SiCp composite. The present work formulates the depth of groove and the height of the recast layer as functions of the machining parameters using Response Surface Methodology (RSM), and the resulting mathematical models were then used for optimization. Since the effects of the machining parameters on the depth of groove and the height of the recast layer were contradictory, the problem was formulated as a multi-objective optimization problem. The evolutionary non-dominated sorting genetic algorithm (NSGA-II) was employed to optimize the models established by RSM and to obtain the Pareto-optimal set of solutions, providing a detailed basis for selecting the optimal machining parameters. Finally, experiments were conducted to confirm the results obtained from RSM and NSGA-II.
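
As a hedged illustration of the optimization step described above (not the authors' code; the variable bounds and response-surface coefficients below are made up), a two-objective NSGA-II run of this kind can be set up with the pymoo library:

```python
# A minimal sketch, not the authors' code: NSGA-II applied to two
# conflicting RSM-style response models using the pymoo library.
# Variable bounds and polynomial coefficients below are made up.
from pymoo.core.problem import ElementwiseProblem
from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.optimize import minimize

class LMMProblem(ElementwiseProblem):
    def __init__(self):
        # x = [pulse energy, scan speed]; bounds are illustrative only
        super().__init__(n_var=2, n_obj=2, xl=[0.1, 1.0], xu=[1.0, 10.0])

    def _evaluate(self, x, out, *args, **kwargs):
        e, v = x
        # Hypothetical second-order RSM models fitted to experiments
        depth = 10 + 8 * e - 0.5 * v + 2 * e * v - 3 * e**2       # to maximize
        recast = 1 + 2 * e - 0.1 * v + 0.5 * e**2 + 0.01 * v**2   # to minimize
        out["F"] = [-depth, recast]  # pymoo minimizes, so negate depth

res = minimize(LMMProblem(), NSGA2(pop_size=50), ("n_gen", 100), seed=1)
print(res.F)  # Pareto front of (-depth of groove, recast height) pairs
```

The sign flip on the groove depth turns its maximization into a minimization, which is the convention pymoo expects; res.F then holds the Pareto front from which a machining condition can be selected.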

Keywords: laser micro machining (LMM), depth of groove, height of recast layer, response surface methodology (RSM), non-dominated sorting genetic algorithm

Procedia PDF Downloads 341
1420 An Enhanced Approach in Validating Analytical Methods Using Tolerance-Based Design of Experiments (DoE)

Authors: Gule Teri

Abstract:

The effective validation of analytical methods forms a crucial component of pharmaceutical manufacturing. However, traditional validation techniques can occasionally fail to fully account for inherent variations within datasets, which may result in inconsistent outcomes. This deficiency in validation accuracy is particularly noticeable when quantifying low concentrations of active pharmaceutical ingredients (APIs), excipients, or impurities, introducing a risk to the reliability of the results and, subsequently, the safety and effectiveness of the pharmaceutical products. In response to this challenge, we introduce an enhanced, tolerance-based Design of Experiments (DoE) approach for the validation of analytical methods. This approach distinctly measures variability with reference to tolerance or design margins, enhancing the precision and trustworthiness of the results. This method provides a systematic, statistically grounded validation technique that improves the truthfulness of results. It offers an essential tool for industry professionals aiming to guarantee the accuracy of their measurements, particularly for low-concentration components. By incorporating this innovative method, pharmaceutical manufacturers can substantially advance their validation processes, subsequently improving the overall quality and safety of their products. This paper delves deeper into the development, application, and advantages of this tolerance-based DoE approach and demonstrates its effectiveness using High-Performance Liquid Chromatography (HPLC) data for verification. This paper also discusses the potential implications and future applications of this method in enhancing pharmaceutical manufacturing practices and outcomes.
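
As a rough sketch of how a tolerance-based acceptance check of this kind can be computed (this uses Howe's classical k-factor approximation on hypothetical assay replicates, and is not the paper's method):

```python
# A minimal sketch, assuming Howe's approximation for a two-sided
# normal tolerance interval; replicate values are hypothetical.
import numpy as np
from scipy.stats import norm, chi2

def tolerance_interval(x, coverage=0.99, confidence=0.95):
    """Two-sided normal tolerance interval via Howe's k2 factor."""
    n, nu = len(x), len(x) - 1
    z = norm.ppf((1 + coverage) / 2)
    k2 = np.sqrt(nu * (1 + 1 / n) * z**2 / chi2.ppf(1 - confidence, nu))
    m, s = np.mean(x), np.std(x, ddof=1)
    return m - k2 * s, m + k2 * s

# Hypothetical low-concentration impurity assay results (percent w/w)
replicates = np.array([0.051, 0.049, 0.050, 0.052, 0.048, 0.050])
lo, hi = tolerance_interval(replicates)
print(f"99%/95% tolerance interval: [{lo:.4f}, {hi:.4f}]")
# The method passes if this interval lies within the design margins.
```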

Keywords: tolerance-based design, design of experiments, analytical method validation, quality control, biopharmaceutical manufacturing

Procedia PDF Downloads 75
1419 Valorisation of Waste Chicken Feathers: Electrospun Antibacterial Nanoparticles-Embedded Keratin Composite Nanofibers

Authors: Lebogang L. R. Mphahlele, Bruce B. Sithole

Abstract:

Chicken meat is the most consumed meat in South Africa, with a per capita consumption of >33 kg per year. As a result, South Africa produces over 250 million kg of waste chicken feathers each year, the majority of which is landfilled or incinerated. The discarded feathers cause environmental pollution and waste a natural protein resource. Valorisation of waste chicken feathers is therefore considered a more environmentally friendly and cost-effective treatment. Feathers contain 91% protein, the main component being beta-keratin, a fibrous and insoluble structural protein extensively cross-linked by disulfide bonds. Keratin is usually converted into nanofibers via electrospinning for a variety of applications. Keratin nanofiber composites have many potential biomedical applications owing to attractive features such as a high surface-to-volume ratio and very high porosity. The application of nanofibers in biomedical wound dressings requires materials with antimicrobial properties. One approach is incorporating inorganic nanoparticles, among which silver nanoparticles have played an important role as an alternative antibacterial agent and have been studied against many types of microbes. The objective of this study is to combine a synthetic polymer, chicken feather keratin, and antibacterial nanoparticles to develop novel electrospun antibacterial nanofibrous composites for possible wound dressing applications. Furthermore, this study will convert a two-dimensional electrospun nanofiber membrane into a three-dimensional fiber network that resembles the structure of the extracellular matrix (ECM).

Keywords: chicken feather keratin, nanofibers, nanoparticles, nanocomposites, wound dressing

Procedia PDF Downloads 123
1418 Verification of Satellite and Observation Measurements to Build Solar Energy Projects in North Africa

Authors: Samy A. Khalil, U. Ali Rahoma

Abstract:

In the measurement of solar radiation, satellite data have routinely been utilized to estimate solar energy. However, the temporal coverage of satellite data has some limits. Reanalysis, also known as "retrospective analysis" of the atmosphere's parameters, is produced by fusing the output of NWP (Numerical Weather Prediction) models with observation data from a variety of sources, including ground, satellite, ship, and aircraft observations. The result is a comprehensive record of the parameters affecting weather and climate. The effectiveness of the ERA-5 reanalysis dataset for North Africa was evaluated against high-quality surface measurements using statistical analysis, estimating the distribution of global solar radiation (GSR) over five chosen areas in North Africa during the ten-year period from 2011 to 2020. To investigate seasonal changes in dataset performance, a seasonal statistical analysis was conducted, which showed a considerable difference in errors throughout the year. Altering the temporal resolution of the data used for comparison changes the performance of the dataset: monthly mean values show better agreement, but data accuracy is degraded. Solar resource assessment and power estimation are discussed using the ERA-5 solar radiation data. The average values of the mean bias error (MBE), root mean square error (RMSE), and mean absolute error (MAE) of the reanalysis solar radiation data vary from 0.079 to 0.222, 0.055 to 0.178, and 0.0145 to 0.198, respectively, over the study period, and the correlation coefficient (R2) varies from 0.93 to 0.99. The objective of this research is to provide a reliable representation of the world's solar radiation to aid the use of solar energy in all sectors.
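
For reference, the agreement statistics reported above can be computed for paired ground/reanalysis series as follows (a small numpy sketch with made-up values, not the study's code):

```python
# A small sketch, not the authors' code: the agreement statistics
# (MBE, RMSE, MAE, R^2) for paired ground vs. reanalysis series.
import numpy as np

def agreement_stats(ground, reanalysis):
    err = reanalysis - ground
    mbe = err.mean()                                 # mean bias error
    rmse = np.sqrt((err**2).mean())                  # root mean square error
    mae = np.abs(err).mean()                         # mean absolute error
    r2 = np.corrcoef(ground, reanalysis)[0, 1]**2    # squared correlation
    return mbe, rmse, mae, r2

# Made-up daily GSR values (kWh/m^2) for illustration only
ground = np.array([5.1, 6.0, 5.7, 4.9, 6.3, 5.5])
era5 = np.array([5.3, 5.8, 5.9, 5.1, 6.1, 5.6])
print(agreement_stats(ground, era5))
```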

Keywords: solar energy, ERA-5 analysis data, global solar radiation, North Africa

Procedia PDF Downloads 93
1417 Rayleigh-Bénard-Taylor Convection of Newtonian Nanoliquid

Authors: P. G. Siddheshwar, T. N. Sakshath

Abstract:

In this paper, we make linear and non-linear stability analyses of Rayleigh-Bénard convection of a Newtonian nanoliquid in a rotating medium (known as Rayleigh-Bénard-Taylor convection). Rigid-rigid isothermal boundaries are considered for the investigation. The Khanafer-Vafai-Lightstone single-phase model is used for studying instabilities in nanoliquids. Various thermophysical properties of the nanoliquid are obtained using phenomenological laws and mixture theory. The eigen boundary value problem is solved for the Rayleigh number analytically by considering trigonometric eigenfunctions. We observe that the critical nanoliquid Rayleigh number is less than that of the base liquid; thus the onset of convection is advanced by the addition of nanoparticles, and an increase in volume fraction leads to earlier onset and thereby increased heat transport. The amplitudes of the convective modes required for estimating the heat transport are determined analytically. The tri-modal standard Lorenz model is derived for the steady state, assuming small-scale convective motions. The effect of rotation on the onset of convection and on heat transport is investigated and depicted graphically. It is observed that the onset of convection is delayed by rotation, which leads to a decrease in heat transport; hence, rotation has a stabilizing effect on the system. This is because part of the energy of the system is used to create the velocity component V induced by rotation. We observe that the amount of heat transported is less for rigid-rigid isothermal boundaries than for free-free isothermal boundaries.
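
For reference, the standard tri-modal Lorenz system referred to has the generic form below; in the paper its coefficients would be rescaled by the nanoliquid properties and the Taylor number, so this is only the skeleton, not the derived model:

```latex
\begin{aligned}
\dot{X} &= \mathrm{Pr}\,(Y - X),\\
\dot{Y} &= r\,X - Y - X Z,\\
\dot{Z} &= X Y - b\,Z,
\end{aligned}
```

where Pr is the Prandtl number, r the scaled Rayleigh number, and b a geometric constant.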

Keywords: nanoliquid, rigid-rigid, rotation, single phase

Procedia PDF Downloads 226
1416 Analysis of the Effective Components on the Performance of the Public Sector in Iran

Authors: Mahsa Habibzadeh

Abstract:

Performance is defined as the process of systematic measurement of how each task is performed, determining its potential for improvement in accordance with the specific standards of each component. Hence, evaluation is the basis for improving organizations' functional excellence, and the move towards performance excellence depends on performance improvement planning. Over the past two decades, the public sector system has undergone dramatic changes. The purpose of such developments is often to overcome the barriers of the bureaucratic system, which impedes the efficient use of limited resources. The implementation of widespread changes in the public sector of developed and even developing countries has led this process of development to be addressed by many researchers. In this regard, the present paper analyzes the components that affect the performance of the public sector in Iran. To achieve this goal, indicators that affect the performance of the public sector and factors affecting the improvement of its accountability were identified. The research method is descriptive and analytical. The statistical population consists of 120 managers and employees of the public sector in Iran. Questionnaires were distributed among them and analyzed using SPSS and LISREL software. The findings show significant relationships between responsibility, participation of managers and employees, legality, justice, transparency, and specialty and competency in public sector performance. The significance coefficients are 3.31 for responsibility, 2.89 for justice, 1.40 for transparency, 2.27 for legality, 2.13 for specialty and competence, and 5.17 for participation. Implementing the indicators that affect the performance of the public sector can lead to the satisfaction of the public.

Keywords: performance, accountability system, public sector, components

Procedia PDF Downloads 224
1415 Right Solution of Geodesic Equation in Schwarzschild Metric and Overall Examination of Physical Laws

Authors: Kwan U. Kim, Jin Sim, Ryong Jin Jang, Sung Duk Kim

Abstract:

It is 108 years since a great number of physicists began explaining astronomical and physical phenomena by solving geodesic equations in the Schwarzschild metric. However, when solving these geodesic equations, they did not correctly solve one branch of the spatial component among the spatial and temporal components of the four-dimensional force, and they did not arrive at the correct physical laws through physical analysis of the results obtained from the geodesic equations. In addition, they did not treat astronomical and physical phenomena in a physical way based on the correct physical laws obtained from the solution of the geodesic equations in the Schwarzschild metric. Some former scholars therefore mentioned that Einstein's theoretical basis for the general theory of relativity was obscure and incorrect, but they did not give a correct physical solution to the problems. Furthermore, since the general theory of relativity has not given a quantitative solution to these obscure and incorrect problems, the generalization of gravitational theory has not yet been successfully completed, although former scholars considered and attempted it. In order to solve these problems, it is necessary to explore the obscure and incorrect problems in the general theory of relativity on the basis of physical laws and to find a methodology for solving them. Therefore, as the first step towards this purpose, the correct solution of the geodesic equation in the Schwarzschild metric is presented. Next, the correct physical laws found by making a physical analysis of the results are presented, the obscure and incorrect problems are shown, and an analysis of them is made based on the physical laws. In addition, experimental verification of the physical laws found by us is presented.

Keywords: equivalence principle, general relativity, geometrodynamics, Schwarzschild, Poincaré

Procedia PDF Downloads 69
1414 Design Optimization of Miniature Mechanical Drive Systems Using Tolerance Analysis Approach

Authors: Eric Mxolisi Mkhondo

Abstract:

Geometrical deviations and the interaction of mechanical parts influence the performance of miniature systems. These deviations tend to cause costly problems during assembly due to imperfections of components, which are invisible to the naked eye. They also tend to cause unsatisfactory performance during operation due to deformation caused by environmental conditions. One of the most effective tools to manage the deviations and interaction of parts in a system is tolerance analysis, a quantitative tool for predicting the tolerance variations defined during the design process. Traditional tolerance analysis assumes that the assembly is static and that the deviations come from manufacturing discrepancies, overlooking the functionality of the whole system and the deformation of parts under environmental conditions. This paper presents an integrated tolerance analysis approach for a miniature system in operation. In this approach, a computer-aided design (CAD) model is developed from the system's specification. The CAD model is then used to specify the geometrical and dimensional tolerance limits (upper and lower limits) that vary the components' geometries and sizes while conforming to functional requirements. Worst-case tolerances are analyzed to determine the influence of dimensional changes due to operating temperatures. The method is used to evaluate the nominal condition and the worst-case conditions at the maximum and minimum dimensions of assembled components. These three conditions are evaluated at specific operating temperatures (-40°C, -18°C, 4°C, 26°C, 48°C, and 70°C). A case study on the mechanism of a zoom lens system is used to illustrate the effectiveness of the methodology.
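
A minimal sketch of a worst-case stack-up that includes thermal expansion, as described above, is given below; the part dimensions, tolerances, and materials are hypothetical, not the zoom lens data:

```python
# An illustrative sketch, with hypothetical dimensions and materials,
# of a worst-case tolerance stack-up including thermal expansion.
ALPHA_AL = 23e-6   # 1/degC, aluminium housing (assumed material)
ALPHA_ST = 12e-6   # 1/degC, steel lens barrel (assumed material)

# (nominal length mm, +/- tolerance mm, expansion coefficient)
stack = [(40.0, 0.02, ALPHA_AL), (25.0, 0.01, ALPHA_ST), (10.0, 0.005, ALPHA_ST)]

def worst_case(stack, temp_c, ref_c=20.0):
    dT = temp_c - ref_c
    nominal = sum(L * (1 + a * dT) for L, _, a in stack)
    spread = sum(t for _, t, _ in stack)   # worst case: tolerances add up
    return nominal - spread, nominal, nominal + spread

for T in (-40, -18, 4, 26, 48, 70):        # operating temperatures from the text
    lo, nom, hi = worst_case(stack, T)
    print(f"{T:>4} degC: min {lo:.4f}  nominal {nom:.4f}  max {hi:.4f} mm")
```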

Keywords: geometric dimensioning, tolerance analysis, worst-case analysis, zoom lens mechanism

Procedia PDF Downloads 162
1413 Identification of Superior Cowpea Mutant Genotypes, Their Adaptability, and Stability Under South African Conditions

Authors: M. Ntswane, N. Mbuma, M. Labuschagne, A. Mofokeng, M. Rantso

Abstract:

Cowpea is an essential legume for the nutrition and health of millions of people in different regions. The production and productivity of the crop are very limited in South Africa due to a lack of adapted and stable genotypes. Improvement of nutritional quality is made possible by manipulating the genes of the diverse cowpea genotypes available around the world. Assessing the adaptability and stability of cowpea mutant genotypes for yield and nutritional quality requires examining them in different environments. The objective of the study was to determine the adaptability and stability of cowpea mutant genotypes under South African conditions and to identify superior genotypes that combine grain yield components, antioxidants, and nutritional quality. Thirty-one cowpea genotypes were obtained from the Agricultural Research Council - Grain Crops (ARC-GC) and were planted in Glen, Mafikeng, Polokwane, Potchefstroom, Taung, and Vaalharts during the 2021/22 summer cropping season. Significant genotype-by-location interactions indicated the possibility of genetic improvement of these traits. Genotype plus genotype-by-environment (GGE) analysis indicated broad adaptability and stability of the mutant genotypes. Principal component analysis identified the association of the genotypes with the traits. Phenotypic correlation analysis showed that Zn and protein content were significantly and positively correlated, suggesting the possibility of indirect selection for these traits. Results from this study could help plant breeders make informed decisions and develop nutritionally improved cowpea genotypes, with the aim of addressing the challenges of poor nutritional quality.

Keywords: cowpea seeds, adaptability, stability, mineral elements, protein content

Procedia PDF Downloads 102
1412 Early Gastric Cancer Prediction from Diet and Epidemiological Data Using Machine Learning in Mizoram Population

Authors: Brindha Senthil Kumar, Payel Chakraborty, Senthil Kumar Nachimuthu, Arindam Maitra, Prem Nath

Abstract:

Gastric cancer is predominantly caused by demographic and dietary factors, as compared to other cancer types. The aim of this study is to predict Early Gastric Cancer (EGC) from diet and lifestyle factors using supervised machine learning algorithms. For this study, 160 healthy individuals and 80 cases who had been followed for 3 years (2016-2019) at Civil Hospital, Aizawl, Mizoram, were selected. A dataset containing 11 features that are core risk factors for gastric cancer was extracted. Supervised machine learning algorithms - Logistic Regression, Naive Bayes, Support Vector Machine (SVM), Multilayer perceptron, and Random Forest - were used to analyze the dataset using Python Jupyter Notebook Version 3. The classification results were evaluated using the metrics minimum false positives, Brier score, accuracy, precision, recall, F1 score, and the Receiver Operating Characteristic (ROC) curve. The analysis showed accuracy and Brier score (in percent) of Naive Bayes - 88, 0.11; Random Forest - 83, 0.16; SVM - 77, 0.22; Logistic Regression - 75, 0.25; and Multilayer perceptron - 72, 0.27. The Naive Bayes algorithm outperforms the others, with a very low false positive rate, a low Brier score, and good accuracy. Its classification results in predicting EGC are very satisfactory using only diet and lifestyle factors, which will be very helpful for physicians in educating patients and the public, so that mortality from gastric cancer can be reduced or avoided with this knowledge mining work.
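
A condensed sketch of this model-comparison workflow with scikit-learn is shown below; the synthetic dataset merely stands in for the 240-subject, 11-feature diet/lifestyle data, which is not public:

```python
# A condensed sketch, not the study's notebook: comparing the listed
# classifiers by accuracy and Brier score with scikit-learn.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, brier_score_loss
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.ensemble import RandomForestClassifier

# Stand-in for the 240-subject, 11-feature dataset (synthetic data)
X, y = make_classification(n_samples=240, n_features=11, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

models = {
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "Naive Bayes": GaussianNB(),
    "SVM": SVC(probability=True),
    "Multilayer perceptron": MLPClassifier(max_iter=2000),
    "Random Forest": RandomForestClassifier(random_state=0),
}
for name, model in models.items():
    prob = model.fit(X_tr, y_tr).predict_proba(X_te)[:, 1]
    acc = accuracy_score(y_te, (prob > 0.5).astype(int))
    print(name, round(acc, 2), round(brier_score_loss(y_te, prob), 3))
```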

Keywords: early gastric cancer, machine learning, diet, lifestyle characteristics

Procedia PDF Downloads 157
1411 Ordinal Regression with Fenton-Wilkinson Order Statistics: A Case Study of an Orienteering Race

Authors: Joonas Pääkkönen

Abstract:

In sports, individuals and teams are typically interested in final rankings. Final results, such as times or distances, dictate these rankings, also known as places. Places can be further associated with ordered random variables, commonly referred to as order statistics. In this work, we introduce a simple yet accurate order-statistical ordinal regression function that predicts relay race places from changeover times. We call this function the Fenton-Wilkinson Order Statistics model. The model is built on the following educated assumption: individual leg-times follow log-normal distributions. Moreover, our key idea is to utilize Fenton-Wilkinson approximations of changeover times alongside an estimator for the total number of teams, as in the famous German tank problem. This original place regression function is sigmoidal and thus correctly predicts the existence of a small number of elite teams that significantly outperform the rest. Our model also describes how place increases linearly with changeover time at the inflection point of the log-normal distribution function. With real-world data from Jukola 2019, a massive orienteering relay race, the model is shown to be highly accurate even when the size of the training set is only 5% of the whole data set. Numerical results also show that our model exhibits smaller place-prediction root-mean-square errors than linear regression, mord regression, and Gaussian process regression.
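
The moment-matching step at the heart of the model can be sketched as follows: a sum of independent log-normal leg-times is approximated by a single log-normal whose first two moments match (parameters below are illustrative, not the Jukola data):

```python
# A compact sketch of the Fenton-Wilkinson approximation: a sum of
# independent log-normal leg-times approximated by one log-normal
# via moment matching. Leg parameters below are made up.
import numpy as np

def fenton_wilkinson(mus, sigmas):
    """Return (mu, sigma) of the log-normal approximating the sum of
    independent log-normals with parameters (mus[i], sigmas[i])."""
    mus, sigmas = np.asarray(mus), np.asarray(sigmas)
    means = np.exp(mus + sigmas**2 / 2)               # per-leg means
    variances = (np.exp(sigmas**2) - 1) * means**2    # per-leg variances
    m, v = means.sum(), variances.sum()               # independence
    sigma2 = np.log(1 + v / m**2)
    mu = np.log(m) - sigma2 / 2
    return mu, np.sqrt(sigma2)

# e.g. a 4-leg relay with assumed per-leg parameters (log-minutes)
mu, sigma = fenton_wilkinson([4.0, 4.2, 3.9, 4.1], [0.15, 0.2, 0.1, 0.18])
print(mu, sigma)   # parameters of the approximate changeover-time law
```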

Keywords: Fenton-Wilkinson approximation, German tank problem, log-normal distribution, order statistics, ordinal regression, orienteering, sports analytics, sports modeling

Procedia PDF Downloads 115
1410 Government Final Consumption Expenditure, Financial Deepening, and Household and NPISHs Consumption Expenditure in Nigeria

Authors: Usman A. Usman

Abstract:

Undeniably, unlike the Classical side, the Keynesian perspective of the aggregate demand side has a significant position in the policy, growth, and welfare of Nigeria, due to government involvement and the ineffective demand of a population living on poor per capita income. This study investigates the effect of Government Final Consumption Expenditure and Financial Deepening on Households' and NPISHs' final consumption expenditure, using data on Nigeria from 1981 to 2019. The study employed the ADF stationarity test, the Johansen cointegration test, and a Vector Error Correction Model (VECM). The results revealed that the coefficient of government final consumption expenditure has a positive effect on household consumption expenditure in the long run, and that there is a long-run and short-run relationship between gross fixed capital formation and household consumption expenditure. The coefficients of financial deepening (cpsgdp) and gross fixed capital formation posit a negative impact on household final consumption expenditure, while the coefficient of money supply (lm2gdp), another proxy for financial deepening, and the coefficient of FDI have a positive effect on household final consumption expenditure in the long run. Therefore, since gross fixed capital formation stimulates household consumption expenditure, this study recommends a legal framework to support investment as a panacea for increasing household income and consumption and reducing poverty in Nigeria; this should be a key central component of policy.
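
A minimal sketch of the ADF, Johansen, and VECM pipeline with statsmodels is shown below, run on made-up series in place of the Nigerian data:

```python
# A minimal sketch (assumed workflow, not the author's code) of the
# ADF -> Johansen -> VECM pipeline using statsmodels on made-up data.
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import adfuller
from statsmodels.tsa.vector_ar.vecm import coint_johansen, VECM

rng = np.random.default_rng(0)
n = 39  # annual observations, 1981-2019
trend = np.cumsum(rng.normal(size=n))  # shared stochastic trend
data = pd.DataFrame({
    "hh_consumption": trend + rng.normal(size=n),
    "govt_expenditure": 0.8 * trend + rng.normal(size=n),
    "fin_deepening": 0.5 * trend + rng.normal(size=n),
})

for col in data:  # step 1: unit-root (stationarity) checks
    print(col, "ADF p-value:", round(adfuller(data[col])[1], 3))

jres = coint_johansen(data, det_order=0, k_ar_diff=1)  # step 2
print("trace statistics:", jres.lr1)  # compare against jres.cvt

vecm = VECM(data, k_ar_diff=1, coint_rank=1).fit()     # step 3
print(vecm.alpha)  # error-correction (adjustment) coefficients
```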

Keywords: household, government expenditures, vector error correction model, Johansen test

Procedia PDF Downloads 50
1409 Authentic Visual Resources for the Foreign Language Classroom

Authors: O. Yeret

Abstract:

Visual resources are all around us, especially in today's media-driven world, which gravitates more and more towards the visual. As a result, authentic resources such as television advertisements become testaments – authentic cultural materials – that reflect the landscape of certain groups and communities during a specific point in time. Engaging language students with popular advertisements can provide a great opportunity for developing cultural awareness, a component that is sometimes overlooked in the foreign language classroom. This paper showcases practical examples of using Israeli television ads in various Modern Hebrew language courses. Several approaches for combining the study of language and culture through the use of advertisements are included; for example, targeted assignments based on students' proficiency levels, such as asking students to recognize vocabulary words and answer basic information questions, as opposed to commenting on the significance of an ad and analyzing its particular cultural elements. The use of visual resources in the language classroom not only enables students to learn more about the culture of the target language but also lets them combine their language skills. Most often, interacting with an ad requires close listening and some reading (through captions or other data). As students analyze the ad, they employ their writing and speaking skills by answering questions in text or audio form. Hence, these interactions elicit complex language use across the four domains: listening, speaking, writing, and reading. This paper includes examples of practical assignments developed for several Modern Hebrew language courses, together with the specific advertisements and questions related to them. Conclusions from the process and recent feedback received from students regarding the use of visual resources are mentioned as well.

Keywords: authentic materials, cultural awareness, second language acquisition, visual resources

Procedia PDF Downloads 105
1408 Theory of the Optimum Signal Approximation Clarifying the Importance in the Recognition of Parallel World and Application to Secure Signal Communication with Feedback

Authors: Takuro Kida, Yuichi Kida

Abstract:

In this paper, we present the mathematical basis of a new trend of algorithms that treats a historical source of continuous discrimination in the world, as well as its solution, by introducing the new concept of a parallel world that includes an invisible set of errors as its companion. With respect to a matrix operator filter bank in which the matrix operator analysis filter bank H and the matrix operator sampling filter bank S are given, we first introduce a detailed algorithm to derive the optimum matrix operator synthesis filter bank Z that simultaneously minimizes all the worst-case measures of the matrix operator error signals E(ω) = F(ω) − Y(ω) between the matrix operator input signals F(ω) and the matrix operator output signals Y(ω) of the filter bank. Further, feedback is introduced into the above approximation theory, and it is shown that introducing conversations with feedback is not automatically superior to the accumulation of existing knowledge of signal prediction. Secondly, the concept of category from mathematics is applied to the above optimum signal approximation, and it is indicated that category-based approximation theory applies to a set-theoretic consideration of human recognition. Based on this discussion, it is shown why the narrow perception that tends to create isolation shows an apparent advantage in the short term, and why such narrow thinking often becomes intimate with discriminatory action in a human group. Throughout these considerations, we present that, in order to abolish easy and intimate discriminatory behavior, it is important to create a parallel world of conception where we share the set of invisible error signals, including the words and the consciousness of both worlds.

Keywords: matrix filterbank, optimum signal approximation, category theory, simultaneous minimization

Procedia PDF Downloads 135
1407 Quantification of Hydrogen Sulfide and Methyl Mercaptan in Air Samples from a Waste Management Facility

Authors: R. F. Vieira, S. A. Figueiredo, O. M. Freitas, V. F. Domingues, C. Delerue-Matos

Abstract:

The presence of sulphur compounds such as hydrogen sulphide and mercaptans is one of the reasons why waste-water treatment and waste management are associated with odour emissions. In this context, a method for quantifying these compounds helps in the optimization of treatment, with the goal of their elimination, namely by biofiltration processes. The aim of this study was the development of a method for the quantification of odorous gases in air samples from waste treatment plants. A method based on headspace solid-phase microextraction (HS-SPME) coupled with gas chromatography with flame photometric detection (GC-FPD) was used to analyse H2S and methyl mercaptan (MM). The extraction was carried out with a 75-μm Carboxen-polydimethylsiloxane fiber coating at 22 ºC for 20 min, and the analysis was performed on a GC 2010 Plus A from Shimadzu with a sulphur filter detector in splitless mode (0.3 min); the column temperature program started at 60 ºC and increased by 15 ºC/min to 100 ºC (2 min). The injector temperature was held at 250 ºC, and the detector at 260 ºC. For the calibration curve, a gas diluter (digital Hovagas G2 Multi-Component Gas Mixer) was used to prepare the standards. This unit had two input connections, one for a stream of the diluted gas and another for a stream of nitrogen, and an output connected to a glass bulb. A 40 ppm H2S cylinder and a 50 ppm MM cylinder were used. The equipment was programmed to the selected concentration and automatically carried out the dilution into the glass bulb. The mixture was left flowing through the glass bulb for 5 min, and then the extremities were closed. This method allowed calibration between 1-20 ppm for H2S and 0.02-0.1 ppm and 1-3.5 ppm for MM. Quantification of several air samples from the inlet and outlet of a biofilter operating in a waste management facility in the north of Portugal allowed the evaluation of the biofilter's performance.

Keywords: biofiltration, hydrogen sulphide, mercaptans, quantification

Procedia PDF Downloads 470
1406 Hyaluronan and Hyaluronan-Associated Genes in Human CD8 T Cells

Authors: Emily Schlebes, Christian Hundhausen, Jens W. Fischer

Abstract:

The glycosaminoglycan hyaluronan (HA) is a major component of the extracellular matrix, typically produced by fibroblasts of the connective tissue but also by immune cells. Here, we investigated the capacity of human peripheral blood CD8 T cells from healthy donors to produce HA and to express HA receptors as well as HA-degrading enzymes. Further, we evaluated the effect of pharmacological HA inhibition on CD8 T cell function. Using immunocytochemistry together with quantitative PCR analysis, we found that HA synthesis is rapidly induced upon antibody-induced T cell receptor (TCR) activation and is almost exclusively mediated by HA synthase 3 (HAS3). TCR activation also resulted in the upregulation of the HA receptors CD44, hyaluronan-mediated motility receptor (HMMR), and layilin (LAYN), although the kinetics and strength of expression varied greatly between subjects. The HA-degrading enzymes HYAL1 and HYAL2 were detected at low levels and were induced by cell activation in some individuals. Interestingly, expression of HAS3, the HA receptors, and the hyaluronidases was modulated by the proinflammatory cytokines IL-6 and IL-1β in most subjects. To assess the functional role of HA in CD8 T cells, we performed carboxyfluorescein succinimidyl ester (CFSE) based proliferation assays and cytokine analysis in the presence of the HA inhibitor 4-methylumbelliferone (4-MU). Despite significant inter-individual variation with regard to the effective dose, 4-MU resulted in the inhibition of CD8 T cell proliferation and reduced release of TNF-α and IFN-γ. Collectively, these data demonstrate that human CD8 T cells respond to TCR stimulation with synthesis of HA and expression of HA-related genes. They further suggest that HA inhibition may be helpful in interfering with pathogenic T cell activation in human disease.

Keywords: CD8 T cells, extracellular matrix, hyaluronan, hyaluronan synthase 3

Procedia PDF Downloads 94
1405 Avoiding Gas Hydrate Problems in Qatar Oil and Gas Industry: Environmentally Friendly Solvents for Gas Hydrate Inhibition

Authors: Nabila Mohamed, Santiago Aparicio, Bahman Tohidi, Mert Atilhan

Abstract:

One of the biggest problems Qatar faces in processing its main natural resource, natural gas, is the blockage that often occurs in pipelines due to uncontrolled gas hydrate formation. Several millions of dollars are spent at process sites to clear such blockages safely by using chemical inhibitors. We aim to establish a national database that addresses the physical conditions that promote Qatari natural gas to form gas hydrates in pipelines. Moreover, we aim to design and test novel hydrate inhibitors that are suitable for Qatari natural gas and its processing facilities. From these perspectives, we aim to provide more effective and sustainable reservoir utilization and processing of Qatari natural gas. In this work, we present the initial findings of a QNRF-funded project, which investigates the gas hydrate formation characteristics of Qatari-type gas by both experimental (PVTx) and computational (molecular simulation) methods. We present data from two fully automated apparatus: a gas hydrate autoclave and a rocking cell. Hydrate equilibrium curves, including growth/dissociation conditions for multi-component systems, were obtained for several gas mixtures that represent Qatari-type natural gas, with and without the presence of well-known kinetic and thermodynamic hydrate inhibitors. Ionic liquids were designed and used to test their inhibition performance, and their DFT and molecular modeling simulation results were obtained and compared with the experimental results. The results showed significant performance of ionic liquids at up to 0.5% by volume, with up to 2 to 4 °C inhibition at high pressures.

Keywords: gas hydrates, natural gas, ionic liquids, inhibition, thermodynamic inhibitors, kinetic inhibitors

Procedia PDF Downloads 1312
1404 Assessing Functional Structure in European Marine Ecosystems Using a Vector-Autoregressive Spatio-Temporal Model

Authors: Katyana A. Vert-Pre, James T. Thorson, Thomas Trancart, Eric Feunteun

Abstract:

In marine ecosystems, spatial and temporal species structure is an important component of ecosystems' response to anthropogenic and environmental factors. Although spatial distribution patterns and temporal series of fish abundance have been studied in the past, little research has been devoted to the joint dynamic spatio-temporal functional patterns in marine ecosystems and their use in multispecies management and conservation. Each species represents a function in the ecosystem, and the distribution of these species might not be random: a heterogeneous functional distribution will lead to an ecosystem that is more resilient to external factors. Applying a Vector-Autoregressive Spatio-Temporal (VAST) model for count data, we estimated the spatio-temporal distribution, shift in time, and abundance of 140 species of the Eastern English Channel, Bay of Biscay, and Mediterranean Sea. From the model outputs, we determined spatio-temporal clusters, calculating p-values for hierarchical clustering via multiscale bootstrap resampling, and then designed a functional map given the defined clusters. We found that the species distribution within the ecosystem was not random: species evolved in space and time in clusters. Moreover, these clusters remained similar over time, deriving from the fact that species of the same cluster often shifted in sync, keeping the overall structure of the ecosystem similar over time. Knowing the co-existing species within these clusters could help with predicting the distribution and abundance of data-poor species. Further analysis is being performed to assess the ecological functions represented in each cluster.
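
A rough sketch of the clustering step on model-estimated density fields is given below using scipy; note that it uses ordinary hierarchical clustering on made-up values and omits the multiscale bootstrap resampling the study uses for cluster p-values:

```python
# A rough sketch, illustrative only: clustering species by their
# model-estimated spatio-temporal density fields with scipy.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(1)
# Stand-in for VAST output: 140 species x (sites * years) densities
densities = rng.lognormal(size=(140, 200))
Z = linkage(np.log(densities), method="ward")      # hierarchical tree
clusters = fcluster(Z, t=5, criterion="maxclust")  # e.g. 5 functional groups
print(np.bincount(clusters)[1:])                   # species count per cluster
```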

Keywords: cluster distribution shift, European marine ecosystems, functional distribution, spatio-temporal model

Procedia PDF Downloads 188
1403 Predicting Foreign Direct Investment of IC Design Firms from Taiwan to East and South China Using Lotka-Volterra Model

Authors: Bi-Huei Tsai

Abstract:

This work explores the inter-region investment behavior of the integrated circuit (IC) design industry from Taiwan to China, using the amount of foreign direct investment (FDI). Given the mutual dependence among different IC design industrial locations, a Lotka-Volterra model is utilized to explore the FDI interactions between South and East China, considering the effects of inter-regional collaboration on FDI flows into China. The evolution of FDI into South China for the IC design industry significantly inspires subsequent FDI into East China, while FDI into East China for Taiwan's IC design industry significantly hinders subsequent FDI into South China. The supply chain of the IC industry includes IC design, manufacturing, packaging, and testing enterprises; the IC manufacturing, packaging, and testing industries depend on the IC design industry to gain advanced business benefits. The FDI amount from Taiwan's IC design industry into East China is the greatest among the four regions (North, East, Mid-West, and South China), and the FDI amount into South China is the second largest. If IC design houses buy more equipment and bring more capital into South China, those in East China will be under pressure to undertake more FDI into East China to maintain the leading-position advantages of the supply chain there. On the other hand, as FDI in East China rises, FDI in South China successively declines, since capital has concentrated in East China. The prediction of FDI trends by the Lotka-Volterra model is accurate because the industrial interactions between the two regions are included. Finally, this work confirms that the FDI flows cannot reach a stable equilibrium point, so FDI inflows into East and South China will expand in the future.
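
In generic form, a competitive two-region Lotka-Volterra system of the kind described here can be written as follows (the paper's exact specification may differ):

```latex
\begin{aligned}
\frac{dx_S}{dt} &= x_S\left(a_1 + b_1\,x_S + c_1\,x_E\right),\\[2pt]
\frac{dx_E}{dt} &= x_E\left(a_2 + b_2\,x_E + c_2\,x_S\right),
\end{aligned}
```

where x_S and x_E denote cumulative FDI into South and East China, and the signs of the cross terms c_1 and c_2 capture the enhancement and hindrance effects reported above.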

Keywords: Lotka-Volterra model, foreign direct investment, competition, equilibrium analysis

Procedia PDF Downloads 355
1402 Frequency of Alloimmunization in Sickle Cell Disease Patients in Africa: A Systematic Review with Meta-analysis

Authors: Theresa Ukamaka Nwagha, Angela Ogechukwu Ugwu, Martins Nweke

Abstract:

Background and Objectives: Blood transfusion is an effective and proven treatment for some severe complications of sickle cell disease. Recurrent transfusions have put patients with sickle cell disease at risk of developing antibodies against the various antigens they were exposed to. This study aims to investigate the frequency of red blood cell alloimmunization in patients with sickle cell disease in Africa. Materials and Methods: This is a systematic review of peer-reviewed literature published in English. The review was conducted consistent with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses checklist. Data sources for the review included MEDLINE, PubMed, CINAHL, and Academic Search Complete. Included in this review are articles that reported the frequency/prevalence of red blood cell alloimmunization in sickle cell disease patients in Africa. Eligible studies were subjected to independent full-text screening and data extraction. Risk of bias assessment was conducted with the aid of the mixed methods appraisal tool. We employed a random-effects model of meta-analysis to estimate the pooled prevalence, and computed Cochran's Q statistic, I2, and the prediction interval to quantify heterogeneity in effect size. Results: The prevalence estimates ranged from 2.6% to 29%. The pooled prevalence was estimated to be 10.4% (CI 7.7-13.8; PI = 3.0-34.0%), with significant heterogeneity (I2 = 84.62%; PI = 2.0-32.0%) and publication bias (Egger's t-test = 1.744, p = 0.0965). Conclusion: The frequency of red cell alloantibodies varies considerably in Africa. The alloantibodies appeared frequent in this order: Rhesus, Kell, Lewis, Duffy, MNS, and Lutheran.
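
For reference, the standard random-effects pooling behind these estimates takes the form below (with p̂_i the study prevalences, v_i their variances, τ̂² the estimated between-study variance, and k the number of studies; this is the generic estimator, not a detail reported in the abstract):

```latex
\hat{p} \;=\; \frac{\sum_{i=1}^{k} w_i\,\hat{p}_i}{\sum_{i=1}^{k} w_i},
\qquad w_i \;=\; \frac{1}{v_i + \hat{\tau}^2},
\qquad I^2 \;=\; \max\!\left(0,\;\frac{Q-(k-1)}{Q}\right)\times 100\%.
```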

Keywords: frequency, red blood cell, alloimmunization, sickle cell disease, Africa

Procedia PDF Downloads 92
1401 Microstructure and Hardness Changes on T91 Weld Joint after Heating at 560°C

Authors: Suraya Mohamad Nadzir, Badrol Ahmad, Norlia Berahim

Abstract:

T91 steel has been used as a construction material for superheater tubes in sub-critical and supercritical boilers. This steel was developed with a higher creep strength than conventional low alloy steels. However, it is also susceptible to material degradation due to its sensitivity to heat treatment, especially post weld heat treatment (PWHT) after weld repair. Review of the PWHT process shows that the holding temperature may differ from one batch of samples to another depending on the material composition. This issue has been reviewed by many researchers, and one potential solution is the development of a weld repair process without PWHT. Such a process is possible with the use of the temper bead welding technique. However, studies have shown that, without PWHT, the hardness value across the weld joint is much higher than the recommended hardness value. Based on the above findings, a study was carried out to evaluate the microstructure and hardness changes of a T91 weld joint after heating at 560°C for varying durations, in order to evaluate the possibility of a self-tempering process during the in-service period. In this study, the T91 weld joint was heated in an air furnace at 560°C for durations of 50 and 150 hours. The heating process was controlled with a heating rate of 200°C/hour and a cooling rate of about 100°C/hour. Following this process, samples were prepared for microstructure examination and hardness evaluation. The results show a fully tempered martensite structure, and an acceptable hardness value was achieved after 50 hours of heating. This result shows that thin components such as T91 superheater tubes are able to self-temper during service hours.

Keywords: T91, weld-joint, tempered martensite, self-tempering

Procedia PDF Downloads 372
1400 Computer Aided Diagnosis Bringing Changes in Breast Cancer Detection

Authors: Devadrita Dey Sarkar

Abstract:

Regardless of the many technological advances of the past decade, increased training and experience, and the obvious benefits of uniform standards, the false-negative rate in screening mammography remains unacceptably high. This abstract presents a computer-aided neural network classification of regions of suspicion (ROS) on digitized mammograms, employing features extracted by a new technique based on independent component analysis. CAD is a concept established by taking into account equally the roles of physicians and computers, whereas automated computer diagnosis is a concept based on computer algorithms only. With CAD, the performance of computers does not have to be comparable to or better than that of physicians, but needs to be complementary to it. In fact, a large number of CAD systems have been employed to assist physicians in the early detection of breast cancers on mammograms. A CAD scheme that makes use of lateral breast images has the potential to improve the overall performance in the detection of breast lumps. Because breast lumps can be detected reliably by computer on lateral breast mammograms, radiologists' accuracy in the detection of breast lumps would be improved by the use of CAD, and thus early diagnosis of breast cancer would become possible. In the future, many CAD schemes could be assembled as packages and implemented as part of PACS. For example, a package for breast CAD may include the computerized detection of breast nodules as well as the computerized classification of benign and malignant nodules. To assist in differential diagnosis, it would be possible to search for and retrieve images (or lesions) with these CAD systems, which would be a reliable and useful method for quantifying the similarity of a pair of images for visual comparison by radiologists.

Keywords: CAD (computer-aided diagnosis), lesions, neural network, ROS (regions of suspicion)

Procedia PDF Downloads 453
1399 Urban Change Detection and Pattern Analysis Using Satellite Data

Authors: Shivani Jha, Klaus Baier, Rafiq Azzam, Ramakar Jha

Abstract:

In India, people generally migrate from rural to urban areas for better infrastructural facilities, a higher standard of living, good job opportunities, and advanced transport/communication availability. In fact, unplanned urban development due to the migration of people causes serious land use damage, water pollution, and pressure on available water resources. In the present work, an attempt has been made to use satellite data from different years for urban change detection in the Chennai metropolitan area, along with pattern analysis to generate a future scenario of urban development using buffer zoning in a GIS environment. In the analysis, SRTM (30 m) elevation data and IRS-1C satellite data for the years 1990, 2000, and 2014 are used. The flow accumulation, aspect, flow direction, and slope maps developed using the SRTM 30 m data are very useful for finding suitable urban locations for industrial setup and urban settlements. The Normalized Difference Vegetation Index (NDVI) and Principal Component Analysis (PCA) have been used in ERDAS Imagine software for change detection in land use of the Chennai metropolitan area. It has been observed that the urban area has increased exponentially, with a significant decrease in agricultural and barren lands. However, the water bodies located in the study region are protected and used as freshwater sources for drinking purposes. Using buffer zone analysis in a GIS environment, it has been observed that development has taken place significantly in the south-west direction and will continue to do so in the future.
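
The NDVI computation behind the change detection is a single band-ratio step; a small numpy sketch is given below (the band arrays are assumed to be already loaded from the IRS-1C imagery, and the variable names are placeholders):

```python
# A one-step sketch of the NDVI computation used for change detection;
# nir and red are assumed to be co-registered band arrays.
import numpy as np

def ndvi(nir, red):
    nir, red = nir.astype(float), red.astype(float)
    return (nir - red) / (nir + red + 1e-9)  # small epsilon avoids /0

# Differencing NDVI rasters from two dates highlights urban conversion:
# change = ndvi(nir_2014, red_2014) - ndvi(nir_1990, red_1990)
```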

Keywords: urban change, satellite data, the Chennai metropolis, change detection

Procedia PDF Downloads 399
1398 Impact of Economic Globalization on Ecological Footprint in India: Evidenced with Dynamic ARDL Simulations

Authors: Muhammed Ashiq Villanthenkodath, Shreya Pal

Abstract:

Purpose: This study scrutinizes the impact of economic globalization on the ecological footprint while endogenizing economic growth and energy consumption from 1990 to 2018 in India. Design/methodology/approach: The standard unit root test was employed for the time series analysis to unveil the order of integration. Then, cointegration was confirmed using autoregressive distributed lag (ARDL) analysis. Further, the study executed the dynamic ARDL simulation model to estimate long-run and short-run results along with simulation-based predictions. Findings: The cointegration analysis confirms the existence of a long-run association among the variables. Further, economic globalization reduces the ecological footprint in the long run. Similarly, energy consumption decreases the ecological footprint. In contrast, economic growth spurs the ecological footprint in India. Originality/value: This study contributes to the literature in many ways. First, unlike studies that employ the CO2 emissions and globalization nexus, this study employs the ecological footprint for measuring environmental quality; since it is the broader measure of environmental quality, it can offer a wide range of climate change mitigation policies for India. Second, the study executes a multivariate framework with an updated series from 1990 to 2018 for India to explore the link between the ecological footprint, economic globalization, energy consumption, and economic growth. Third, the dynamic autoregressive distributed lag (ARDL) model is used to explore the short- and long-run associations between the series. Finally, to our limited knowledge, this is the first study that uses economic globalization in the ecological footprint function of India, amid facing a trade-off between sustainable economic growth and the environment in the era of globalization.
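
In generic form, the error-correction representation underlying dynamic ARDL simulations can be written as below, shown here for a single regressor while the study uses several (so this is the textbook skeleton, not the fitted specification):

```latex
\Delta y_t \;=\; \alpha_0 \;+\; \theta\,\bigl(y_{t-1} - \beta\,x_{t-1}\bigr)
\;+\; \sum_{i=1}^{p-1} \gamma_i\,\Delta y_{t-i}
\;+\; \sum_{j=0}^{q-1} \delta_j\,\Delta x_{t-j} \;+\; \varepsilon_t,
```

where y is the ecological footprint, x a regressor such as economic globalization, θ the speed of adjustment toward the long-run relation, and β the long-run coefficient.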

Keywords: economic globalization, ecological footprint, India, dynamic ARDL simulation model

Procedia PDF Downloads 119
1397 Implementing a Strategy of Reliability Centred Maintenance (RCM) in the Libyan Cement Industry

Authors: Khalid M. Albarkoly, Kenneth S. Park

Abstract:

The substantial development of the construction industry has forced the cement industry, its major supplier, to focus on achieving maximum productivity to meet the growing demand for this material. Statistics indicate that the demand for cement rose from 1.6 billion metric tons (bmt) in 2000 to 4 bmt in 2013. This means that the reliability of a production system needs to be at the highest level achievable by good maintenance. This paper studies the extent to which the implementation of RCM is needed as a strategy for increasing the reliability of production system components, thus ensuring continuous productivity. In a case study of four Libyan cement factories, 80 employees were surveyed and 12 top and middle managers interviewed. It is evident that these factories usually break down more often than once per month, which has led to a decline in productivity: they cannot produce more than 50% of their designed capacity. This has resulted from the poor reliability of their production systems as a consequence of poor or insufficient maintenance. It was found that most of the factories' employees misunderstand maintenance and its importance, the main cause being a lack of qualified and trained staff; in addition, most employees were found to lack motivation as a result of a lack of management support and interest. In response to these findings, it is suggested that an RCM strategy should be implemented in the four factories. The paper shows the importance of considering the development of maintenance strategies through the implementation of RCM in these factories, the purpose being to overcome the problems that reduce the reliability of the production systems. This study could be a useful source of information for academic researchers and for industrial organisations which are still experiencing problems in maintenance practices.

Keywords: Libyan cement industry, reliability centred maintenance, maintenance, production, reliability

Procedia PDF Downloads 384
1396 Mitochondrial Apolipoprotein A-1 Binding Protein Promotes Repolarization of Inflammatory Macrophage by Repairing Mitochondrial Respiration

Authors: Hainan Chen, Jina Qing, Xiao Zhu, Ling Gao, Ampadu O. Jackson, Min Zhang, Kai Yin

Abstract:

Objective: Editing macrophage activation to dampen inflammatory diseases by promoting the repolarization of inflammatory (M1) macrophages to anti-inflammatory (M2) macrophages is highly associated with mitochondrial respiration. Recent studies have suggested that mitochondrial apolipoprotein A-1 binding protein (APOA1BP) is essential for repairing the cellular metabolite NADHX to NADH, which is necessary for mitochondrial function. The exact role of APOA1BP in the repolarization of M1 to M2, however, is uncertain. Material and method: THP-1-derived macrophages were incubated with LPS (10 ng/ml) and/or IL-4 (100 U/ml) for 24 hours. Biochemical parameters of oxidative phosphorylation and M1/M2 markers were analyzed after overexpression of APOA1BP in the cells. Results: Compared with control and IL-4-exposed M2 cells, APOA1BP was downregulated in M1 macrophages. APOA1BP restored the decline in mitochondrial function, thereby improving the metabolic and phenotypic reprogramming of M1 to M2 macrophages. Blocking oxidative phosphorylation with oligomycin blunted the effects of APOA1BP on M1 to M2 repolarization. Mechanistically, LPS triggered the hydration of NADH and increased its hydrate NADHX, which inhibits cellular NADH dehydrogenases, a key component of the electron transport chain for oxidative phosphorylation. APOA1BP decreased the level of NADHX by converting R-NADHX to biologically useful S-NADHX. A mutant of APOA1BP at aspartate 188, the binding site of NADHX, failed to repair oxidative phosphorylation, thereby preventing repolarization. Conclusions: Restoring mitochondrial function by increasing mitochondrial APOA1BP might be useful for improving the reprogramming of inflammatory macrophages into anti-inflammatory cells to control inflammatory diseases.

Keywords: inflammatory diseases, macrophage repolarization, mitochondrial respiration, apolipoprotein A-1 binding protein, NADHX, NADH

Procedia PDF Downloads 168
1395 Accuracy of VCCT for Calculating Stress Intensity Factor in Metal Specimens Subjected to Bending Load

Authors: Sanjin Kršćanski, Josip Brnić

Abstract:

The Virtual Crack Closure Technique (VCCT) is a method for calculating the stress intensity factor (SIF) of a cracked body that is easily implemented on top of basic finite element (FE) codes and as such can be applied to various component geometries. It is a relatively simple method that does not require any special finite elements and is usually used for calculating stress intensity factors at the crack tip for components made of brittle materials. This paper studies the applicability and accuracy of VCCT applied to standard metal specimens containing a through-thickness crack subjected to an in-plane bending load. Finite element analyses were performed using regular 4-node, regular 8-node, and modified quarter-point 8-node 2D elements. The stress intensity factor was calculated from the FE model results for a given crack length, using data available from the FE analysis and a custom-programmed algorithm based on the virtual crack closure technique. The influence of finite element size on the accuracy of the calculated SIF was also studied. The final part of the paper compares the calculated stress intensity factors with results obtained from analytical expressions found in the available literature and in the ASTM standard. Results calculated by this VCCT-based algorithm were found to be in good correlation with results obtained from the mentioned analytical expressions.
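
The VCCT post-processing step for regular 4-node elements can be sketched as follows (2D, mode I, unit thickness and illustrative input numbers rather than the paper's FE results):

```python
# A minimal sketch, assuming 2D mode I VCCT with 4-node elements:
# the nodal force at the crack tip times the opening displacement of
# the node pair one element behind the tip gives G, and K follows.
import math

def vcct_mode_I(Fy_tip, dv_behind, da, E, nu, plane_stress=True, t=1.0):
    """Energy release rate G and SIF K_I from FE results.
    Fy_tip: nodal force normal to the crack plane at the tip node;
    dv_behind: relative crack opening one element behind the tip;
    da: crack-tip element length; t: specimen thickness."""
    G = Fy_tip * dv_behind / (2.0 * da * t)
    E_eff = E if plane_stress else E / (1.0 - nu**2)
    return G, math.sqrt(G * E_eff)

# Illustrative numbers only (N, mm, MPa):
G, K1 = vcct_mode_I(Fy_tip=120.0, dv_behind=2.4e-3, da=0.5, E=200e3, nu=0.3)
print(f"G = {G:.3f} N/mm, K_I = {K1:.1f} MPa*sqrt(mm)")
```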

Keywords: VCCT, stress intensity factor, finite element analysis, 2D finite elements, bending

Procedia PDF Downloads 298
1394 Remaining Useful Life Estimation of Bearings Based on Nonlinear Dimensional Reduction Combined with Timing Signals

Authors: Zhongmin Wang, Wudong Fan, Hengshan Zhang, Yimin Zhou

Abstract:

In data-driven prognostic methods, the prediction accuracy of the estimated remaining useful life of bearings mainly depends on the performance of the health indicators, which usually fuse statistical features extracted from vibration signals. However, the existing health indicators have the following two drawbacks: (1) the different ranges of the statistical features make different contributions to the constructed health indicators, so expert knowledge is required to extract the features; (2) when convolutional neural networks are utilized to tackle the time-frequency features of signals, the time-series nature of the signals is not considered. To overcome these drawbacks, this study proposes a method combining a convolutional neural network with a gated recurrent unit to extract time-frequency image features. The extracted features are utilized to construct a health indicator and predict the remaining useful life of bearings. First, the original signals are converted into time-frequency images by using the continuous wavelet transform so as to form the original feature sets. Second, with the convolutional and pooling layers of convolutional neural networks, the most sensitive features of the time-frequency images are selected from the original feature sets. Finally, these selected features are fed into the gated recurrent unit to construct the health indicator. The results show that the proposed method performs better than related studies that have used the same bearing dataset provided by PRONOSTIA.
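
A schematic PyTorch sketch of a CNN-plus-GRU pipeline of the kind described above is shown below; the layer sizes and image dimensions are assumptions, not the authors' configuration:

```python
# A schematic sketch (not the authors' architecture): a per-image CNN
# feature extractor feeding a GRU over the time axis, in PyTorch.
import torch
import torch.nn as nn

class CnnGruHealth(nn.Module):
    def __init__(self):
        super().__init__()
        self.cnn = nn.Sequential(              # per-image feature extractor
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
            nn.Flatten(),                      # -> 16 * 4 * 4 = 256 features
        )
        self.gru = nn.GRU(256, 64, batch_first=True)
        self.head = nn.Linear(64, 1)           # health indicator / RUL output

    def forward(self, x):                      # x: (batch, time, 1, H, W)
        b, t = x.shape[:2]
        feats = self.cnn(x.flatten(0, 1)).view(b, t, -1)
        out, _ = self.gru(feats)               # sequence of hidden states
        return self.head(out[:, -1])           # predict from last time step

model = CnnGruHealth()
dummy = torch.randn(2, 10, 1, 64, 64)          # 2 sequences of 10 CWT images
print(model(dummy).shape)                      # torch.Size([2, 1])
```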

Keywords: continuous wavelet transform, convolutional neural network, gated recurrent unit, health indicators, remaining useful life

Procedia PDF Downloads 128
1393 Nonlinear Aerodynamic Parameter Estimation of a Supersonic Air to Air Missile by Using Artificial Neural Networks

Authors: Tugba Bayoglu

Abstract:

Aerodynamic parameter estimation is crucial in the missile design phase, since an accurate high-fidelity aerodynamic model is required for designing a high-performance and robust control system, developing high-fidelity flight simulations, and verifying computational and wind tunnel test results. However, in the literature, there are few missile aerodynamic parameter identification studies, for three main reasons: (1) most air-to-air missiles cannot fly at constant speed, (2) missile flight test numbers and flight durations are much smaller than those of fixed-wing aircraft, and (3) the variation of missile aerodynamic parameters with respect to Mach number is higher than that of fixed-wing aircraft. In addition to these challenges, identification of aerodynamic parameters at high wind angles by using classical estimation techniques brings a further difficulty: most estimation techniques require employing polynomials or splines to model the behavior of the aerodynamics, and for missiles with a large variation of aerodynamic parameters with respect to flight variables, the order of the proposed model increases, which brings computational burden and complexity. Therefore, this study aims to solve the nonlinear aerodynamic parameter identification problem for a supersonic air-to-air missile by using artificial neural networks. The proposed method will be tested using simulated data generated with a six-degree-of-freedom missile model involving a nonlinear aerodynamic database. The data will be corrupted by adding noise to the measurement model. Then, using the flight variables and measurements, the parameters will be estimated. Finally, the prediction accuracy will be investigated.
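
An illustrative sketch of the core idea, fitting a neural network to map flight variables to an aerodynamic coefficient, is shown below on synthetic data (the flight variables, coefficient model, and noise level are all assumptions; the actual six-degree-of-freedom database is not public):

```python
# An illustrative sketch on synthetic data: an MLP regressor mapping
# flight variables to a made-up nonlinear aerodynamic coefficient.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
# Hypothetical flight variables: Mach, angle of attack, sideslip (rad)
X = rng.uniform([0.8, -0.5, -0.2], [3.0, 0.5, 0.2], size=(5000, 3))
# Made-up nonlinear coefficient plus measurement noise
y = 0.1 * X[:, 0] + 2.0 * np.sin(3 * X[:, 1]) + X[:, 1] * X[:, 2]
y += rng.normal(scale=0.01, size=len(y))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000).fit(X_tr, y_tr)
print("R^2 on held-out data:", round(net.score(X_te, y_te), 3))
```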

Keywords: air to air missile, artificial neural networks, open loop simulation, parameter identification

Procedia PDF Downloads 270