Search results for: pulmonary function test
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 13537

2317 An Evolutionary Approach for QAOA for Max-Cut

Authors: Francesca Schiavello

Abstract:

This work aims to create a hybrid algorithm combining the Quantum Approximate Optimization Algorithm (QAOA) with an Evolutionary Algorithm (EA) in place of traditional gradient-based optimization processes. QAOA was first introduced in 2014, when it performed better than the best known classical algorithm for Max-Cut. Whilst classical algorithms have since improved and regained the lead in speed and efficiency, this was a major milestone for quantum computing, and the original work is often used as a benchmark and as a foundation for exploring QAOA variants. Alongside other famous algorithms such as Grover's or Shor's, it highlights the potential that quantum computing holds and points to a real quantum advantage: if the hardware continues to improve, this could constitute a revolutionary era. Given that the hardware is not there yet, many scientists are working on the software side in the hope of future progress. Some of the major limitations holding back quantum computing are the quality of qubits and the noisy interference they generate when creating solutions, the barren plateaus that hinder the optimization search in the latent space, and the limited number of available qubits, which restricts the scale of the problems that can be solved. These three issues are intertwined and motivate the use of EAs in this work. Firstly, EAs do not rely on gradient-based or linear optimization methods to search the latent space, and because they are gradient-free, they should suffer less from barren plateaus. Secondly, because the algorithm searches the solution space through a population of solutions, it can be parallelized to speed up the search and optimization. The evaluation of the cost function, as in many other algorithms, is notoriously slow, and the ability to parallelize it can drastically improve the competitiveness of QAOA with respect to purely classical algorithms. Thirdly, because of the nature and structure of EAs, solutions can be carried forward in time, making them more robust to noise and uncertainty. Preliminary results show that the EA attached to QAOA can perform on par with the traditional QAOA using the COBYLA optimizer, a linear-approximation-based method, and in some instances it can even produce a better Max-Cut. Whilst the final objective of the work is an algorithm that consistently beats the original QAOA, or its variants, through either speedups or solution quality, this initial result is promising and shows the potential of EAs in this field. Further tests need to be performed on an array of different graphs, with the parallelization work commencing in October 2023 and tests on real hardware scheduled for early 2024.
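The optimizer swap the abstract describes, gradient-free evolutionary search over the QAOA angle parameters, can be sketched classically. In this minimal sketch, `fake_qaoa_cost` is an invented smooth stand-in for the QAOA expectation value (a real run would evaluate the circuit on a simulator or backend with the gamma/beta angles); only the population loop reflects the proposed method.

```python
import math
import random

random.seed(0)

# Invented stand-in for the QAOA expectation value: a real implementation
# would submit the angles (gamma_1..p, beta_1..p) to a quantum simulator
# or backend and return the (negated) Max-Cut expectation.
def fake_qaoa_cost(angles):
    # Smooth multimodal surface; lower is better.
    return sum(math.sin(3 * a) ** 2 + 0.1 * a * a for a in angles)

def evolve(cost, dim, pop_size=20, generations=100, sigma=0.3):
    """(mu + lambda)-style evolutionary search: no gradients required."""
    pop = [[random.uniform(-math.pi, math.pi) for _ in range(dim)]
           for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=cost)
        parents = scored[:pop_size // 2]          # truncation selection
        children = [[a + random.gauss(0, sigma) for a in random.choice(parents)]
                    for _ in range(pop_size - len(parents))]
        pop = parents + children                  # elitism keeps the best angles
    return min(pop, key=cost)

best = evolve(fake_qaoa_cost, dim=4)
print(round(fake_qaoa_cost(best), 3))
```

Because each generation's cost evaluations are independent, the fitness loop is the natural place to parallelize, which is the speedup argument the abstract makes.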

Keywords: evolutionary algorithm, max cut, parallel simulation, quantum optimization

Procedia PDF Downloads 46
2316 A Machine Learning Approach for Detecting and Locating Hardware Trojans

Authors: Kaiwen Zheng, Wanting Zhou, Nan Tang, Lei Li, Yuanhang He

Abstract:

The integrated circuit industry has become a cornerstone of the information society, with widespread application in areas such as industry, communication, medicine, and aerospace. However, with the increasing complexity of integrated circuits, Hardware Trojans (HTs) implanted by attackers have become a significant threat to their security. In this paper, we propose a hardware trojan detection method for large-scale circuits. Because HTs are additional redundant circuits that introduce changes in physical characteristics such as structure, area, and power consumption, we propose a machine-learning-based detection method built on the physical characteristics of gate-level netlists. This method transforms hardware trojan detection into a machine-learning binary classification problem over physical characteristics, greatly improving detection speed. To address the imbalanced data problem, in which HT circuit samples are far fewer than genuine circuit samples, we used the SMOTETomek algorithm to expand the dataset and further improve classifier performance. We trained and validated three machine learning algorithms, K-Nearest Neighbors, Random Forest, and Support Vector Machine, on benchmark circuits from Trust-Hub, and all achieved good results. Case studies based on AES encryption circuits provided by Trust-Hub showed the effectiveness of the proposed method. To further validate its effectiveness against variant HTs, we designed variant HTs from open-source HTs. The proposed method guarantees robust detection accuracy with millisecond-level detection times for IC and FPGA design flows, and shows good detection performance for library-variant HTs.
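As a toy illustration of classifying nets by gate-level physical features, the sketch below uses a hand-rolled k-nearest-neighbors vote on invented (fan-in, switching activity, area) triples. The paper's actual pipeline, SMOTETomek resampling plus KNN/RF/SVM classifiers trained on Trust-Hub netlists, is not reproduced here; this only shows the shape of the binary classification step.

```python
import math

# Hypothetical gate-level features: (fan-in, switching activity, area score).
# Label 1 = trojan-inserted net, 0 = genuine net.  Values are illustrative only.
TRAIN = [
    ((2, 0.9, 1.0), 0), ((3, 0.8, 1.2), 0), ((2, 0.7, 0.9), 0),
    ((4, 0.85, 1.1), 0), ((6, 0.05, 2.5), 1), ((7, 0.02, 2.8), 1),
    ((5, 0.08, 2.2), 1),
]

def knn_predict(x, train, k=3):
    """Classify a net by majority vote of its k nearest neighbours."""
    nearest = sorted(train, key=lambda t: math.dist(x, t[0]))[:k]
    votes = [label for _, label in nearest]
    return max(set(votes), key=votes.count)

# A rarely-switching, high-area net looks like a dormant trojan trigger.
print(knn_predict((6, 0.03, 2.6), TRAIN))  # 1
print(knn_predict((3, 0.75, 1.0), TRAIN))  # 0
```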

Keywords: hardware trojans, physical properties, machine learning, hardware security

Procedia PDF Downloads 133
2315 Impact of Charging PHEV at Different Penetration Levels on Power System Network

Authors: M. R. Ahmad, I. Musirin, M. M. Othman, N. A. Rahmat

Abstract:

The Plug-in Hybrid Electric Vehicle (PHEV) has gained immense popularity in recent years. PHEVs offer numerous advantages over conventional internal-combustion engine (ICE) vehicles, and millions of PHEVs were estimated to be on the road in the USA by 2020. Uncoordinated PHEV charging is believed to cause severe impacts on the power grid, i.e., feeder, line, and transformer overloads and voltage drops. Nevertheless, improper PHEV data models used in such studies may render their findings inappropriate. Although smart charging has become more attractive to researchers in recent years, its implementation is not yet attainable on the street because it requires physical infrastructure readiness and technology advancement. As a first step, it is best to study the impact of charging PHEVs based on real vehicle travel data from the National Household Travel Survey (NHTS) and at present charging rates. Owing to the current lack of street charging stations, charging PHEVs at home is the best option and has been considered in this work. This paper proposes a technique that comprehensively presents the impact of PHEV charging on power system networks, considering large numbers of PHEV samples together with their travel data patterns. A Vehicles Charging Load Profile (VCLP) is developed and implemented in the IEEE 30-bus test system, which represents a portion of the American Electric Power system (Midwestern US). A normalization technique is used to correspond to real-time loads at all buses. Results from the study indicate that charging PHEVs using opportunity charging will have significant impacts on power system networks, especially when larger battery capacities (kWh) are used and at higher penetration levels.
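The uncoordinated "charge on arrival" behavior behind a VCLP can be sketched as a simple hourly aggregation. The arrival hours and the 3.3 kW charging rate below are illustrative stand-ins, not NHTS data or the paper's actual model.

```python
# Aggregate a Vehicles Charging Load Profile (VCLP) from home-arrival times.
# Each tuple is (arrival hour, hours of charging needed); values are invented.
CHARGE_KW = 3.3
arrivals = [(17, 4), (18, 3), (19, 2), (22, 1)]

def vehicle_charging_load_profile(arrivals, rate_kw=CHARGE_KW):
    load = [0.0] * 24
    for start, duration in arrivals:
        for h in range(start, start + duration):
            load[h % 24] += rate_kw       # uncoordinated: charge on arrival
    return load

profile = vehicle_charging_load_profile(arrivals)
peak_hour = profile.index(max(profile))
print(peak_hour, round(max(profile), 1))
```

Even this toy profile shows why uncoordinated charging worries planners: home arrivals cluster in the early evening, so the charging peak lands on top of the existing residential demand peak.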

Keywords: plug-in hybrid electric vehicle, transportation electrification, impact of charging PHEV, electricity demand profile, load profile

Procedia PDF Downloads 266
2314 Evaluating the Dosimetric Performance for 3D Treatment Planning System for Wedged and Off-Axis Fields

Authors: Nashaat A. Deiab, Aida Radwan, Mohamed S. Yahiya, Mohamed Elnagdy, Rasha Moustafa

Abstract:

This study evaluates the dosimetric performance of our institution's 3D treatment planning system for wedged and off-axis 6 MV photon beams, guided by the recommended QA tests documented in AAPM TG-53, the NCS Report 15 test packages, IAEA TRS-430, and ESTRO Booklet No. 7. The study was performed on an Elekta Precise linear accelerator designed for a clinical range of 4, 6, and 15 MV photon beams, with asymmetric jaws and a fully integrated multileaf collimator that enables high conformance to the target with sharp field edges. Ten tests were applied on a solid water-equivalent phantom along with a 2D array dose detection system. Doses calculated with the 3D treatment planning system PrecisePLAN were compared with measured doses to verify that the dose calculations are accurate for simple situations such as square and elongated fields, different SSDs, beam modifiers (e.g., wedges, blocks, MLC-shaped fields), and asymmetric collimator settings. The QA results showed dosimetric accuracy of the TPS within the specified tolerance limits, with two exceptions: for the large elongated wedged field, the errors on and outside the central axis were 0.2% and 0.5%, respectively, and for the off-planned and off-axis elongated fields, the errors in the region outside the central axis of the beam were 0.2% and 1.1%, respectively. The investigated dosimetric results yielded differences within the recommended tolerance level. Differences between dose values predicted by the TPS and measured values at the same point result from limitations of the dose calculation, uncertainties in the measurement procedure, or fluctuations in the output of the accelerator.
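TPS-versus-measurement comparisons of this kind reduce to per-point percent errors checked against a tolerance. A minimal sketch with invented dose values (not figures from the study):

```python
# Point-dose comparison between TPS-calculated and measured doses,
# flagged against a tolerance in per cent.  Values are illustrative.
def percent_error(calculated, measured):
    return 100.0 * (calculated - measured) / measured

def within_tolerance(calculated, measured, tol_pct):
    return abs(percent_error(calculated, measured)) <= tol_pct

# e.g. a wedged-field point: TPS predicts 1.980 Gy, measurement reads 2.000 Gy
err = percent_error(1.980, 2.000)
print(round(err, 1), within_tolerance(1.980, 2.000, tol_pct=2.0))
```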

Keywords: quality assurance, dose calculation, wedged fields, off-axis fields, 3D treatment planning system, photon beam

Procedia PDF Downloads 428
2313 Towards Accurate Velocity Profile Models in Turbulent Open-Channel Flows: Improved Eddy Viscosity Formulation

Authors: W. Meron Mebrahtu, R. Absi

Abstract:

Velocity distribution in turbulent open-channel flows is organized in a complex manner, owing to the large spatial and temporal variability of fluid motion that results from the free-surface turbulent flow condition. The phenomenon is complicated further by the complex geometry of channels and the presence of transported solids. Thus, several efforts have been made to understand the phenomenon and obtain accurate mathematical models suitable for engineering applications. However, predictions remain inaccurate because oversimplified assumptions are involved in modeling this complex phenomenon. Therefore, the aim of this work is to study velocity distribution profiles and obtain simple, more accurate, and predictive mathematical models. Particular focus is placed on acceptable simplifications of the general transport equations and an accurate representation of the eddy viscosity. A wide rectangular open channel seems suitable for beginning the study; the other assumptions are a smooth wall and sediment-free flow under steady and uniform flow conditions. These assumptions allow examining the effects of the bottom wall and the free surface only, a necessary step before dealing with more complex flow scenarios. For this flow condition, two ordinary differential equations are obtained for the velocity profile: one from the Reynolds-averaged Navier-Stokes (RANS) equation and one from the equilibrium between turbulent kinetic energy (TKE) production and dissipation. Different analytic models for the eddy viscosity, TKE, and mixing length were then assessed. Computed velocity profiles were compared with experimental data for different flow conditions and with the well-known linear, log, and log-wake laws. Results show that the model based on the RANS equation provides more accurate velocity profiles. In the viscous sublayer and buffer layer, the method based on Prandtl's eddy viscosity model and the Van Driest mixing length gives a more precise result. For the log layer and outer region, a mixing-length equation derived from von Karman's similarity hypothesis provides the best agreement with measured data, except near the free surface, where an additional correction based on a damping function for the eddy viscosity is used. This method yields more accurate velocity profiles with a single value of the damping coefficient that remains valid under different flow conditions. This work will continue with investigations of narrow channels, complex geometries, and the effect of solids transported in sewers.
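For reference, the classical linear (viscous-sublayer) and log-law profiles that the computed results are compared against can be evaluated directly. The constants kappa = 0.41 and B = 5.2 are the usual smooth-wall values, not values fitted by the authors.

```python
import math

KAPPA = 0.41   # von Karman constant
B = 5.2        # log-law additive constant for a smooth wall

def u_plus_log_law(y_plus):
    """Dimensionless velocity in the log layer: u+ = (1/kappa) ln(y+) + B."""
    return math.log(y_plus) / KAPPA + B

def u_plus_viscous(y_plus):
    """Viscous sublayer (y+ below about 5): u+ = y+."""
    return y_plus

for yp in (3, 30, 100, 300):
    up = u_plus_viscous(yp) if yp < 5 else u_plus_log_law(yp)
    print(yp, round(up, 2))
```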

Keywords: accuracy, eddy viscosity, sewers, velocity profile

Procedia PDF Downloads 99
2312 Investigating the Relationship Between the Auditor’s Personality Type and the Quality of Financial Reporting in Companies Listed on the Tehran Stock Exchange

Authors: Seyedmohsen Mortazavi

Abstract:

The purpose of this research is to investigate the effect of internal auditors' personality types on the quality of financial reporting in companies listed on the Tehran Stock Exchange. Personality type is one of the issues emphasized in the study of auditor behavior, a field that has attracted the attention of shareholders and listed companies because auditors' personalities can affect the type and quality of financial reporting. The research is applied in purpose and descriptive-correlational in method, and a researcher-made questionnaire was used to test the research hypotheses. The statistical population comprises all auditors, accountants, and financial managers of companies listed on the Tehran Stock Exchange; because of their large number and the uncertainty about their exact count, a statistical sample of 384 people was drawn using Morgan's table. The researcher-made questionnaire was approved by experts in the field, and its validity and reliability were then assessed using statistical software. For validity, confirmatory factor analysis was first examined; then, using divergent and convergent validity (the Fornell-Larcker criterion and cross-loadings), the validity of the questionnaire was confirmed. The reliability of the questionnaire was examined using Cronbach's alpha and composite reliability, and the results of both tests showed appropriate reliability. After checking validity and reliability, PLS software was used to test the hypotheses. The results showed that the personalities of internal auditors can affect the quality of financial reporting; the personality traits investigated, neuroticism, extroversion, flexibility, agreeableness, and conscientiousness, can all affect the quality of financial reporting.

Keywords: flexibility, quality of financial reporting, agreeableness, conscientiousness

Procedia PDF Downloads 87
2311 Inflation and Unemployment Rates as Indicators of the Transition European Union Countries Monetary Policy Orientation

Authors: Elza Jurun, Damir Piplica, Tea Poklepović

Abstract:

Numerous studies carried out in developed western democracies have shown that the ideological framework of the governing party has a significant influence on monetary policy. An executive authority consisting of a left-wing party gives a higher weight to suppressing unemployment, and the central bank implements a more expansionary monetary policy. On the other hand, a right-wing governing party considers monetary stability more important than unemployment suppression, and in such a political framework the main macroeconomic objective becomes reducing the inflation rate. The political conditions in the transition countries that are new European Union (EU) members are still highly specific relative to the other EU member countries. The focus of this paper is whether the monetary policy principles that hold in developed western democratic EU members are also valid in these transition countries. The database consists of inflation and unemployment rates for 11 transition EU member countries covering the period from 2001 to 2012, together with the right or left political orientation of the ruling party for each country and year. We use t-statistics to test the hypothesis that inflation and unemployment differ between right- and left-oriented governing parties. Descriptive statistics are used to explore differences across countries, years, and political orientations. Inflation and unemployment should be strongly negatively correlated through time, which is tested using the Pearson correlation coefficient. Depending on whether the governing authority is composed of left- or right-oriented parties, the monetary authorities will adjust policy by setting a higher priority on lowering inflation or reducing unemployment.
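The two-sample comparison the paper performs can be sketched with a hand-rolled Welch t-statistic. The inflation figures below are invented placeholders, not the 2001-2012 data.

```python
from statistics import mean, variance

def welch_t(sample_a, sample_b):
    """Two-sample t-statistic (unequal variances) for comparing, e.g.,
    inflation rates under left- vs right-oriented governments."""
    na, nb = len(sample_a), len(sample_b)
    va, vb = variance(sample_a), variance(sample_b)   # sample variances (n-1)
    return (mean(sample_a) - mean(sample_b)) / (va / na + vb / nb) ** 0.5

# Illustrative inflation rates (%), not the paper's data.
left = [4.1, 3.8, 5.0, 4.6, 4.9]
right = [2.9, 3.1, 2.5, 3.4, 2.8]
print(round(welch_t(left, right), 2))
```

A positive statistic here means the first group's mean is higher; in practice the statistic would be compared against the t distribution (or computed by SPSS/statistical software, as in the paper) to obtain a p-value.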

Keywords: inflation rate, monetary policy orientation, transition EU countries, unemployment rate

Procedia PDF Downloads 429
2310 Theta-Phase Gamma-Amplitude Coupling as a Neurophysiological Marker in Neuroleptic-Naive Schizophrenia

Authors: Jun Won Kim

Abstract:

Objective: Theta-phase gamma-amplitude coupling (TGC) was used as a novel evidence-based tool to reflect the dysfunctional cortico-thalamic interaction in patients with schizophrenia. However, to our best knowledge, no studies have reported the diagnostic utility of TGC in the resting-state electroencephalogram (EEG) of neuroleptic-naive patients with schizophrenia compared to healthy controls. Thus, the purpose of this EEG study was to understand the underlying mechanisms in patients with schizophrenia by comparing resting-state TGC between the two groups and to evaluate its diagnostic utility. Method: The subjects were 90 patients with schizophrenia and 90 healthy controls. All patients were diagnosed with schizophrenia according to the criteria of the Diagnostic and Statistical Manual of Mental Disorders, 4th edition (DSM-IV) by two independent psychiatrists using semi-structured clinical interviews. Because patients were either drug-naive (first episode) or had not been taking psychoactive drugs for one month before the study, the influence of medications could be excluded. Six frequency bands were defined for the spectral analyses: delta (1-4 Hz), theta (4-8 Hz), slow alpha (8-10 Hz), fast alpha (10-13.5 Hz), beta (13.5-30 Hz), and gamma (30-80 Hz). The spectral power of the EEG data was calculated with the fast Fourier transform using the 'spectrogram.m' function of the Signal Processing Toolbox in Matlab. An analysis of covariance (ANCOVA) was performed to compare the TGC results between the groups, adjusted using a Bonferroni correction (P < 0.05/19 = 0.0026). Receiver operating characteristic (ROC) analysis was conducted to examine the ability of the TGC data to discriminate schizophrenia. Results: The patients with schizophrenia showed a significant increase in resting-state TGC at all electrodes. The delta, theta, slow alpha, fast alpha, and beta powers showed low accuracies of 62.2%, 58.4%, 56.9%, 60.9%, and 59.0%, respectively, in discriminating the patients with schizophrenia from the healthy controls. The ROC analysis performed on the TGC data generated the most accurate result among the EEG measures, with an overall classification accuracy of 92.5%. Conclusion: Because TGC includes phase, which carries information about neuronal interactions in the EEG recording, TGC is expected to be useful for understanding the mechanisms of the dysfunctional cortico-thalamic interaction in patients with schizophrenia. The resting-state TGC value was increased in the patients with schizophrenia compared to the healthy controls and had a higher discriminating ability than the other parameters. These findings may be related to compensatory hyper-arousal patterns of the dysfunctional default-mode network (DMN) in schizophrenia. Further research exploring the association between TGC and medical or psychiatric conditions that may confound EEG signals will help clarify its potential utility.
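Phase-amplitude coupling measures in the TGC family can be sketched with a mean-vector-length estimator applied to synthetic signals. This is an illustrative surrogate (invented phase and amplitude series, not EEG data), not the authors' exact TGC computation.

```python
import cmath
import math

def coupling_strength(theta_phase, gamma_amp):
    """Mean-vector-length estimate of phase-amplitude coupling:
    |mean(A(t) * exp(i*phi(t)))| / mean(A(t)).  Near 0 means no coupling."""
    n = len(theta_phase)
    num = sum(a * cmath.exp(1j * p) for p, a in zip(theta_phase, gamma_amp))
    return abs(num / n) / (sum(gamma_amp) / n)

# Synthetic surrogate data: theta phase advancing at 6 Hz, sampled at 250 Hz,
# with gamma amplitude either locked to the theta phase or flat.
t = [i / 250.0 for i in range(2500)]            # 10 s of samples
phase = [(2 * math.pi * 6 * ti) % (2 * math.pi) for ti in t]
coupled_amp = [1.0 + 0.8 * math.cos(p) for p in phase]
flat_amp = [1.0] * len(phase)

print(round(coupling_strength(phase, coupled_amp), 2))
print(round(coupling_strength(phase, flat_amp), 2))
```

In real EEG analysis the theta phase and gamma amplitude envelope would first be extracted with band-pass filtering and a Hilbert transform; only the coupling statistic itself is shown here.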

Keywords: quantitative electroencephalography (QEEG), theta-phase gamma-amplitude coupling (TGC), schizophrenia, diagnostic utility

Procedia PDF Downloads 126
2309 High-Resolution Flood Hazard Mapping Using Two-Dimensional Hydrodynamic Model Anuga: Case Study of Jakarta, Indonesia

Authors: Hengki Eko Putra, Dennish Ari Putro, Tri Wahyu Hadi, Edi Riawan, Junnaedhi Dewa Gede, Aditia Rojali, Fariza Dian Prasetyo, Yudhistira Satya Pribadi, Dita Fatria Andarini, Mila Khaerunisa, Raditya Hanung Prakoswa

Abstract:

Catastrophe risk management is only possible if we can calculate the exposed risks. Jakarta is an important city economically, socially, and politically, and at the same time it is exposed to severe floods; yet flood risk calculation is still very limited in the area. This study calculated the risk of flooding for Jakarta using the 2-dimensional model ANUGA. The 2-dimensional model ANUGA and the 1-dimensional model HEC-RAS were used to calculate the flood risk from 13 major rivers in Jakarta. ANUGA can simulate the physical and dynamical interaction of the streamflow with the river geometry and land cover to produce a 1-meter-resolution inundation map. The streamflow values used as model input were obtained from hydrological analysis of rainfall data using the hydrologic model HEC-HMS. Probabilistic streamflows were derived from probabilistic rainfall using the Log-Pearson III, Normal, and Gumbel statistical distributions, with goodness of fit assessed by the chi-square and Kolmogorov-Smirnov tests. The 2007 flood event was used as a benchmark to evaluate the accuracy of the model output. Property damage estimates were calculated from flood depths for the 1-, 5-, 10-, 25-, 50-, and 100-year return periods against housing value data from BPS-Statistics Indonesia and the Centre for Research and Development of Housing and Settlements, Ministry of Public Works, Indonesia. The vulnerability factor was derived from flood insurance claims. Jakarta's flood loss estimates for the 1-, 5-, 10-, 25-, 50-, and 100-year return periods are, respectively, Rp 1.30 t, Rp 16.18 t, Rp 16.85 t, Rp 21.21 t, Rp 24.32 t, and Rp 24.67 t, against a total building value of Rp 434.43 t.
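Fitting an extreme-value distribution to annual maxima and reading off return-period quantiles is one step of the probabilistic streamflow analysis. The sketch below uses a method-of-moments Gumbel fit (the study also used Log-Pearson III and Normal fits, not reproduced here); the discharges are invented.

```python
import math
from statistics import mean, stdev

def gumbel_quantile(sample, return_period_years):
    """Method-of-moments Gumbel fit; returns the design discharge for a
    T-year return period."""
    m, s = mean(sample), stdev(sample)
    beta = s * math.sqrt(6) / math.pi          # scale parameter
    mu = m - 0.5772 * beta                     # location (Euler-Mascheroni const.)
    p_non_exceed = 1 - 1 / return_period_years
    return mu - beta * math.log(-math.log(p_non_exceed))

# Hypothetical annual-maximum discharges (m^3/s) for one river reach.
annual_max = [210, 180, 250, 300, 190, 220, 270, 240, 205, 260]
for T in (5, 25, 100):
    print(T, round(gumbel_quantile(annual_max, T), 1))
```

The resulting T-year discharges are what would then drive the hydrodynamic model to produce return-period inundation maps.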

Keywords: 2D hydrodynamic model, ANUGA, flood, flood modeling

Procedia PDF Downloads 259
2308 Prognostic Impact of Pre-transplant Ferritinemia: A Survival Analysis Among Allograft Patients

Authors: Mekni Sabrine, Nouira Mariem

Abstract:

Background and aim: Allogeneic hematopoietic stem cell transplantation is a curative treatment for several hematological diseases; however, it carries non-negligible morbidity and mortality that depend on several prognostic factors, including pre-transplant hyperferritinemia. The aim of our study was to estimate the impact of hyperferritinemia on survival and on the occurrence of post-transplant complications. Methods: This was a longitudinal study conducted over 8 years, including all patients who underwent a first allograft. The impact of pre-transplant hyperferritinemia (ferritinemia ≥1500) on survival was studied using the Kaplan-Meier method and the Cox model for uni- and multivariate analysis. The chi-square test and binary logistic regression were used to study the association between pre-transplant ferritinemia and post-transplant complications. Results: One hundred forty patients were included, with a mean age of 26.6 years and a sex ratio (M/F) of 1.4. Hyperferritinemia was found in 33% of patients. It had no significant impact on either overall survival (p=0.9) or event-free survival (p=0.6). In multivariate analysis, only the type of disease was independently associated with overall survival (p=0.04) and event-free survival (p=0.002). Among post-allograft complications, the occurrence of early documented infections was independently associated with pre-transplant hyperferritinemia (p=0.02) and the presence of acute graft-versus-host disease (GVHD) (p<0.001). The occurrence of acute GVHD was associated with early documented infection (p=0.002) and cytomegalovirus reactivation (p<0.001). The occurrence of chronic GVHD was associated with cytomegalovirus reactivation (p=0.006) and graft source (p=0.009). Conclusion: Our study showed a significant impact of pre-transplant hyperferritinemia on the occurrence of early infections but not on survival. Earlier and more accurate assessment of iron overload by other tests, such as liver magnetic resonance imaging, with initiation of chelating treatment could prevent such complications after transplantation.
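The Kaplan-Meier estimate used for the survival analysis can be sketched in a few lines for untied event times. The (time, event) pairs below are illustrative months, not the cohort's records.

```python
# Kaplan-Meier survival estimate from (time, event) pairs, where event=1
# marks a death and event=0 a censored observation.  Times are invented.
def kaplan_meier(observations):
    obs = sorted(observations)
    n_at_risk = len(obs)
    surv, curve = 1.0, []
    for time, event in obs:
        if event:                       # only observed events drop the curve
            surv *= (n_at_risk - 1) / n_at_risk
            curve.append((time, round(surv, 3)))
        n_at_risk -= 1                  # censored subjects leave the risk set
    return curve

data = [(3, 1), (5, 0), (7, 1), (9, 1), (11, 0), (14, 1)]
print(kaplan_meier(data))
```

Comparing two such curves (e.g. high- vs normal-ferritin groups) is what the log-rank test and the Cox model then formalize; tied event times would need the grouped form of the product.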

Keywords: allogeneic, transplants, ferritin, survival

Procedia PDF Downloads 54
2307 An Unsupervised Domain-Knowledge Discovery Framework for Fake News Detection

Authors: Yulan Wu

Abstract:

With the rapid development of social media, the issue of fake news has gained considerable prominence, drawing the attention of both the public and governments. The widespread dissemination of false information poses a tangible threat across multiple domains of society, including politics, the economy, and health. However, much research has concentrated on supervised models trained within specific domains, whose effectiveness diminishes when applied to identify fake news across multiple domains. To solve this problem, some approaches based on domain labels have been proposed: by assigning news to its specific domain in advance, judgments within the corresponding domain may be more accurate. However, these approaches disregard the fact that news records can pertain to multiple domains, resulting in a significant loss of valuable information. In addition, the datasets used for training must all be domain-labeled, which creates unnecessary complexity. To solve these problems, an unsupervised domain-knowledge discovery framework for fake news detection is proposed. First, to effectively retain the multi-domain knowledge of the text, a low-dimensional domain-embedding vector is generated for each news text. Subsequently, a feature extraction module that utilizes these unsupervisedly discovered domain embeddings extracts comprehensive news features. Finally, a classifier determines the authenticity of the news. To verify the proposed framework, tests were conducted on existing widely used datasets, and the experimental results demonstrate that this method improves fake news detection performance across multiple domains. Moreover, even on datasets that lack domain labels, the method can still effectively transfer domain knowledge, reducing the time consumed by tagging without sacrificing detection accuracy.

Keywords: fake news, deep learning, natural language processing, multiple domains

Procedia PDF Downloads 72
2306 Machine Learning Approach in Predicting Cracking Performance of Fiber Reinforced Asphalt Concrete Materials

Authors: Behzad Behnia, Noah LaRussa-Trott

Abstract:

In recent years, fibers have been successfully used as an additive to reinforce asphalt concrete materials and to enhance the sustainability and resiliency of transportation infrastructure. Roads paved with fiber-reinforced asphalt concrete (FRAC) require less frequent maintenance and tend to have a longer lifespan. The present work investigates the application of Sasobit-coated aramid fibers in asphalt pavements and employs machine learning to develop prediction models for the cracking performance of FRAC materials. In the experimental part of the study, the effects of several important parameters, such as fiber content, fiber length, and testing temperature, on the fracture characteristics of FRAC mixtures were thoroughly investigated. Two mechanical performance tests, the disk-shaped compact tension [DC(T)] test and the indirect tensile [ID(T)] strength test, as well as the non-destructive acoustic emission test, were utilized to measure the cracking behavior of the FRAC material at the macro and micro levels, respectively. The experimental results were used to train a supervised machine learning approach to establish prediction models for the fracture performance of FRAC mixtures in the field. The results demonstrated that adding fibers improved the overall fracture performance of asphalt concrete materials by increasing their fracture energy and tensile strength and lowering their 'embrittlement temperature'. FRAC mixtures containing long fibers exhibited better cracking performance than mixtures with regular-size fibers. The developed prediction models could easily be employed by pavement engineers in the assessment of FRAC pavements.
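A minimal stand-in for the study's supervised prediction models is a least-squares line from a mixture parameter to a fracture measure. The fiber contents and DC(T) energies below are invented for illustration; the actual models use more parameters and richer learners.

```python
# Least-squares line relating fiber content to DC(T) fracture energy,
# as a toy version of a supervised prediction model.  Data are invented.
def fit_line(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
            sum((xi - mx) ** 2 for xi in x)
    return slope, my - slope * mx            # (slope, intercept)

fiber_pct = [0.0, 0.05, 0.10, 0.15, 0.20]    # fiber content (% by weight)
energy = [380, 405, 430, 455, 480]           # DC(T) fracture energy (J/m^2)
slope, intercept = fit_line(fiber_pct, energy)
print(round(slope, 1), round(intercept, 1))
print(round(slope * 0.12 + intercept, 1))    # predict energy at 0.12 % fibers
```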

Keywords: fiber reinforced asphalt concrete, machine learning, cracking performance tests, prediction model

Procedia PDF Downloads 126
2305 Experimental Research on the Effect of Activating Temperature on Combustion and NOx Emission Characteristics of Pulverized Coal in a Novel Purification-Combustion Reaction System

Authors: Ziqu Ouyang, Kun Su

Abstract:

A novel efficient and clean coal combustion system, the purification-combustion system, was designed by the Institute of Engineering Thermal Physics, Chinese Academy of Sciences, in 2022. The purification stage is composed of a mesothermal activating unit and a hyperthermal reductive unit, and the combustion stage is a mild combustion unit. In the purification-combustion system, deep in-situ removal of coal-N can be realized by matching the temperature and atmosphere in each unit, so that NOx emissions are controlled effectively. To identify conditions for efficient and clean coal combustion, this study investigated the effect of the activating temperature (822 °C, 858 °C, 933 °C, and 991 °C), the key factor affecting system operation, on the combustion and NOx emission characteristics of pulverized coal in a 30 kW purification-combustion test bench. The results showed that the activating temperature significantly affects the combustion and NOx emission characteristics. As the activating temperature increased, the temperature in the mild combustion unit first increased and then decreased, with a much larger change in the lower part than in the upper part. Moreover, the main combustion region was always located at the top of the unit under the different activating temperatures, and the combustion intensity weakened gradually along the unit. Increasing the activating temperature excessively destroyed the reductive atmosphere early in the upper part of the unit, which was not conducive to the full removal of coal-N in the reductive coal char. As the activating temperature increased, the combustion efficiency first increased and then decreased, while the NOx emission first decreased and then increased, illustrating that properly increasing the activating temperature promotes efficient and clean coal combustion, but only up to a limit. In this study, the optimal activating temperature was 858 °C. This research therefore shows that properly increasing the activating temperature can simultaneously improve combustion efficiency and reduce NOx emissions, guaranteeing clean and efficient coal combustion.

Keywords: activating temperature, combustion characteristics, nox emission, purification-combustion system

Procedia PDF Downloads 72
2304 Effect of Garlic Extract on Growth Performance and Immune System of Broiler

Authors: Merry Muspita Dyah Utami

Abstract:

The positive effects of garlic extract have been reported in many studies: it has antibacterial, antiviral, antiparasitic, antifungal, and growth-promoting properties. Supplementing broiler diets with garlic extract is a way to deliver these bioactive compounds. The avian bursa of Fabricius is essential for antibody-mediated immunity; it is a lymphoid organ whose size is associated with growth and sexual development. This research was conducted to evaluate the effects of garlic extract on the growth performance and immune system of broilers. Seventy-two day-old chicks were divided equally into four groups of three replicates with six chicks each. Group I was a control without garlic extract, while garlic extract was administered to experimental groups II, III, and IV (2, 4, and 6% in the ration). The experiment was conducted over a three-week period, from day-old chicks to 21 days. Body weights were determined at days 1 and 21, feed intake was determined over the same period, and the feed conversion ratio was calculated accordingly. At 21 days of age, four birds per replicate were slaughtered; the bursa was collected, weighed, and calculated as a percentage of live body weight. Mortality was recorded as it occurred and was used to adjust the total number of broilers when determining total feed intake and the feed conversion ratio. Data were expressed as means and compared by one-way analysis of variance (ANOVA) followed by Duncan's test to identify differences between groups. A value of P<0.05 was accepted as significant. Body weight, feed conversion ratio, and the weight of the bursa of Fabricius showed significant differences, whereas feed consumption and the bursa's percentage of live body weight were not significantly different (P>0.05) among dietary treatments. According to these results, garlic extract has a potential role as a natural growth promoter and immunomodulator in broilers.

Keywords: garlic extract, growth, immunity, broiler

Procedia PDF Downloads 317
2303 An Exploration of the Association Between the Physical Activity and Academic Performance in Internship Medical Students

Authors: Ali Ashraf, Ghazaleh Aghaee, Sedigheh Samimian, Mohaya Farzin

Abstract:

Objectives: Previous studies have indicated the positive effect of physical activity and sports on different aspects of health, such as muscle endurance and the sleep cycle. However, among university students, particularly medical students, who have limited time and a stressful lifestyle, few studies have explored this matter with proven statistical results. This study therefore aims to find out how regular physical activity influences the academic performance of medical students during their internship period. Methods: This was a descriptive-analytical study. Overall, 168 medical students (80 women and 88 men) voluntarily participated. The Baecke Physical Activity Questionnaire was applied to determine the students' physical activity levels, and academic performance was determined from their total average academic scores. The data were analyzed in SPSS version 16 using the independent t-test, Pearson correlation, and linear regression. Results: The average age of the students was 26.0±1.5 years. Eighty-eight students (52.4%) were male, and 142 (84.5%) were single. The students' mean total average academic score was 16.2±1.2, and their average physical activity score was 8.3±1.1. The average academic score was not associated with gender (P=0.427), marital status (P=0.645), or age (P=0.320). However, married students had a significantly lower physical activity level than single students (P=0.020). The results indicated a significant positive correlation between the students' physical activity levels and average academic scores (r=+0.410, P<0.001); based on the regression analysis, this correlation was independent of age, gender, and marital status.
Conclusion: The results of the current study suggest that the physical activity level of medical students was low to moderate in most cases and that there was a significant direct relationship between students' physical activity level and academic performance, independent of age, gender, and marital status.
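The Pearson correlation reported above (r=+0.410) can be computed from paired scores as follows; the activity and academic values here are hypothetical, chosen only to show the calculation:

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical Baecke activity scores vs. average academic scores
activity = [6.9, 7.5, 8.1, 8.4, 9.0, 9.6, 10.1]
scores   = [14.8, 15.2, 16.0, 16.1, 16.9, 17.2, 17.8]
r = pearson_r(activity, scores)
```

In the study itself the correlation would be computed over all participants and then adjusted for age, gender, and marital status via linear regression.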

Keywords: exercise, education, physical activity, academic performance

Procedia PDF Downloads 25
2302 The Effect of Metal Transfer Modes on Mechanical Properties of 3CR12 Stainless Steel

Authors: Abdullah Kaymakci, Daniel M. Madyira, Ntokozo Nkwanyana

Abstract:

The effect of metal transfer modes on the mechanical properties of welded 3CR12 stainless steel was investigated. This was achieved by butt welding 10 mm thick plates of 3CR12 in different welding positions, since it was hypothesized that changing the position alters the transfer mode, partly due to the effect of gravity. The ASME IX: 2010 (Welding and Brazing Qualifications) code was used as the basis for the welding variables. The material and thickness of the base metal were kept constant, together with the filler metal, shielding gas, and joint type. The effect of the metal transfer modes on the microstructure and mechanical properties of the 3CR12 steel was then investigated. Microscopic examination revealed that the substrate was characterized by a dual-phase microstructure, that is, alpha-phase and beta-phase grain structures. The spectroscopic examination results and the ferritic factor calculation indicated that the microstructure was expected to be ferritic-martensitic after air cooling. The measured tensile strength and Charpy impact energy were 498 MPa and 102 J, in line with the mechanical properties given in the material certificate. The heat input was observed to be greater than 1 kJ/mm, which is the limiting factor for grain growth during welding. Grain growth was observed in the heat-affected zone of the welded materials, and a ferritic-martensitic microstructure was observed during the microscopic examination. The grain growth altered the mechanical properties of the test material: globular downhand welding gave higher mechanical properties than spray downhand, and globular vertical-up gave better mechanical properties than globular vertical-down.
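The 1 kJ/mm heat-input figure quoted above follows from the conventional arc heat-input relation Q = k·V·I/v (arc efficiency times voltage times current over travel speed). The welding parameters below are hypothetical, not those reported by the study:

```python
def heat_input_kj_per_mm(voltage_v, current_a, travel_speed_mm_s, efficiency=0.8):
    """Arc heat input Q = k * V * I / v, converted from J/mm to kJ/mm."""
    return efficiency * voltage_v * current_a / (travel_speed_mm_s * 1000.0)

# Hypothetical GMAW parameters: 24 V, 180 A, 3 mm/s travel speed
q = heat_input_kj_per_mm(24.0, 180.0, 3.0)
```

With these illustrative values the heat input already exceeds the 1 kJ/mm grain-growth threshold mentioned in the abstract.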

Keywords: welding, metal transfer modes, stainless steel, microstructure, hardness, tensile strength

Procedia PDF Downloads 242
2301 Textile Wastewater Ecotoxicity Abatement after Aerobic Granular Sludge Treatment and Advanced Oxidation Process

Authors: Ana M. T. Mata, Alexiane Ligneul

Abstract:

Textile effluents are usually heavily loaded with organic carbon and color compounds, the latter being azo dyes in an estimated 70% of cases, posing a major challenge for environmental protection. In this study, the ecotoxicity of simulated textile effluent was tested after biological treatment with anaerobic and aerobic phases (aerobic granular sludge, AGS) and after advanced oxidation processes (AOP), namely ozonation and UV irradiation as post-treatment, to evaluate the fitness of these treatments for ecotoxicity abatement. AGS treatment achieved 80% removal of both COD and color. AOP was applied with the intention of mineralizing the metabolites resulting from biodecolorization of the azo dye Acid Red 14, especially the stable aromatic amine 4-amino-1-naphthalenesulfonic acid (4A1NS). The ecotoxicity evaluation was based on growth inhibition of the alga Pseudokirchneriella subcapitata following OECD TG 201, except that MBL medium was used instead of the prescribed medium. Five replicate control cultures and samples were run, with an average standard deviation of 2.7% in the determination of the specific algal growth rate. The untreated textile effluent inhibited the specific growth rate by 82%. AGS treatment by itself lowered the inhibition to 53%, probably due to the treatment's high color removal. AOP post-treatment with ozone and UV irradiation improved the ecotoxicity abatement to 49% and 43% inhibition, respectively, a less significant gain than previously thought. Since over 85% of the 4A1NS was removed by either AOP (followed by HPLC), an individual ecotoxicity test of 4A1NS was performed, showing that 4A1NS does not inhibit algal growth (0% inhibition). It was concluded that AGS treatment by itself achieves a significant ecotoxicity abatement of textile effluent.
The cost-benefit of AOP as a post-treatment has to be better assessed, since its application improved ecotoxicity removal by only 10%. It was also found that the 4A1NS amine had no apparent effect on ecotoxicity. Further studies will be conducted to determine where the ecotoxicity remaining after AGS biological treatment comes from and how to eliminate it.
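The inhibition percentages above follow from comparing specific growth rates against the control, as in OECD TG 201. The growth-rate values below are hypothetical, chosen only to reproduce the reported 82% figure:

```python
def growth_inhibition(mu_control, mu_sample):
    """Percent inhibition of algal specific growth rate relative to the control."""
    return (1.0 - mu_sample / mu_control) * 100.0

# Hypothetical specific growth rates (day^-1): control vs. untreated effluent
inh_untreated = growth_inhibition(1.40, 0.25)
```

A sample growing as fast as the control gives 0% inhibition; complete growth arrest gives 100%.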

Keywords: textile wastewater, ecotoxicity, aerobic granular sludge, AOP

Procedia PDF Downloads 149
2300 Mechanical and Microstructural Study of Photo-Aged Low Density Polyethylene (LDPE) Films

Authors: Meryem Imane Babaghayou, Abdelhafidi Asma

Abstract:

This study deals with the ageing of blown-extruded films of low-density polyethylene (LDPE) used for greenhouse covering. The LDPE was subjected to climatic ageing at a sub-Saharan facility in Laghouat (Algeria) with direct exposure to the sun. The microstructural changes in the films were analyzed by FTIR for different states of ageing. The mechanical characterization was performed on a uniaxial tensile apparatus, and mechanical properties such as Young's modulus, strain at break, and stress at break were followed over different exposure times (0 to 6 months). At the microstructural level, the climatic ageing of LDPE films leads to (i) oxidation of the molecular chains and (ii) the formation of cross-links and chain scissions, both of which are responsible for the modification of the material's mechanical behavior. Cross-links favor the strengthening of the mechanical properties at break (an increase of σr and εr), whereas chain scission leads to a decrease in these properties. The increase in Young's modulus also seems to be related to these structural changes, since cross-linking increases the average molecular weight, and branching and entanglements favor the ductile behavior of the material. Chain scission, on the other hand, reduces the average molecular weight and therefore promotes stiffening (through morphological changes), so the material becomes brittle. The post-mortem analysis of the samples shows that mechanical stress has an effect on the molecular structure of the material. Although the concentrations of the various chemical species change little quantitatively, the unsaturations raise the question of a possible microstructural modification induced by the mechanical stress applied during the tensile test. A more rigorous analysis with other means of investigation is therefore recommended.

Keywords: low-density polyethylene, ageing, mechanical properties, FTIR

Procedia PDF Downloads 347
2299 Performance Evaluation and Plugging Characteristics of Controllable Self-Aggregating Colloidal Particle Profile Control Agent

Authors: Zhiguo Yang, Xiangan Yue, Minglu Shao, Yue Yang, Rongjie Yan

Abstract:

It is difficult to achieve deep profile control in low-permeability heterogeneous reservoirs because of small pore-throats and easy water channeling, and traditional polymer microspheres face a contradiction between injectability and plugging ability. To resolve this contradiction, controllable self-aggregating colloidal particles (CSA) containing amide groups on the microsphere surface were prepared by emulsion polymerization of styrene and acrylamide. A dispersed solution of CSA colloidal particles, whose particle size is much smaller than the diameter of the pore-throats, was injected into the reservoir. When the microspheres migrated to the deep part of the reservoir, the CSA colloidal particles automatically self-aggregated into large particle clusters under the action of the shielding agent and the control agent, thereby plugging the water channels. In this paper, the morphology, temperature resistance, and self-aggregation properties of CSA microspheres were studied by transmission electron microscopy (TEM) and bottle tests. The results showed that CSA microspheres exhibit a heterogeneous core-shell structure, good dispersion, and outstanding thermal stability: the microspheres remained regular, uniform spheres after ageing at 100℃ for 35 days. With increasing cation concentration, the self-aggregation time of the CSA gradually shortened, and the influence of bivalent cations was greater than that of monovalent cations. Core flooding experiments showed that CSA polymer microspheres have good injection properties and that CSA particle clusters can effectively plug water channels and migrate to the deep part of the reservoir for profile control.

Keywords: heterogeneous reservoir, deep profile control, emulsion polymerization, colloidal particles, plugging characteristic

Procedia PDF Downloads 214
2298 Use of Metallic and Bimetallic Nanostructures as Constituents of Active Bio-Based Films

Authors: Lina F. Ballesteros, Hafsae Lamsaf, Miguel A. Cerqueira, Lorenzo M. Pastrana, Sandra Carvalho, Jose A. Teixeira, S. Calderon V.

Abstract:

The use of bio-based packaging materials containing metallic and bimetallic nanostructures is a relatively new technology. The food packaging industry has been investigating biological and renewable resources that can replace petroleum-based materials to reduce environmental impact while adding new functionalities through nanotechnology. The main objective of the present work was therefore to develop bio-based poly-lactic acid (PLA) films with zinc (Zn) and zinc-iron (Zn-Fe) nanostructures deposited by magnetron sputtering. The structural, antimicrobial, and optical properties of the films were evaluated after exposure to 60% and 96% relative humidity (RH). The morphology and elemental composition of the samples were determined by scanning (transmission) electron microscopy (SEM and STEM) and inductively coupled plasma optical emission spectroscopy (ICP-OES). The structure of the PLA was monitored before and after deposition by Fourier transform infrared spectroscopy (FTIR), and the antimicrobial and color assays were performed using the zone of inhibition (ZOI) test and a Minolta colorimeter, respectively. Finally, the films were correlated in terms of deposition conditions, Zn or Zn-Fe concentration, and thickness. The results revealed PLA films with different morphologies, compositions, and thicknesses of Zn or Zn-Fe nanostructures. The samples showed significant antibacterial and antifungal activity against E. coli, P. aeruginosa, P. fluorescens, S. aureus, and A. niger, and considerable changes of color and opacity at 96% RH, especially for the thinner nanostructures (150-250 nm). On the other hand, when the Fe fraction was increased, the lightness of the samples increased, as did their antimicrobial activity compared to the films with pure Zn. These findings are relevant to the food packaging field, since intelligent and active films with multiple properties can be developed.
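Color change measured with a colorimeter, as above, is commonly summarized as the CIE76 color difference ΔE*ab between CIELAB readings; the abstract does not state which metric was used, so this is an assumption, and the L*a*b* values below are hypothetical:

```python
from math import sqrt

def delta_e_ab(lab1, lab2):
    """CIE76 color difference between two (L*, a*, b*) triples."""
    return sqrt(sum((a - b) ** 2 for a, b in zip(lab1, lab2)))

# Hypothetical readings for a film before and after 96% RH exposure
de = delta_e_ab((85.0, 1.2, 4.0), (78.5, 2.0, 9.5))
```

A ΔE*ab above roughly 2-3 units is generally taken to be visible to the eye, so a value of this size would correspond to the "considerable" color change reported.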

Keywords: biopolymers, functional properties, magnetron sputtering, Zn and Zn-Fe nanostructures

Procedia PDF Downloads 102
2297 Human-Elephant Conflict and Mitigation Measures in Buffer Zone of Bardia National Park, Nepal

Authors: Rabin Paudel, Dambar Bahadur Mahato, Prabin Poudel, Bijaya Neupane, Sakar Jha

Abstract:

Understanding Human-Elephant Conflict (HEC) is very important in countries like Nepal, where solutions to escalating conflicts are urgently required. However, most of the HEC mitigation measures implemented so far have been applied on an ad hoc basis, without a detailed understanding of the nature and extent of the damage. This study assesses the current scenario of HEC with regard to crop and property damage by the wild Asian elephant, and people's perceptions of the existing mitigation measures and of elephant conservation, in the buffer zone of Bardia National Park. The methods used were a questionnaire survey (N=178), key-informant interviews (N=18), and focus group discussions (N=6). Descriptive statistics were used to determine the nature and extent of damage and to understand people's perceptions of HEC, its mitigation measures, and elephant conservation. A chi-square test was applied to determine whether crop and property damage varied significantly with distance from the park boundary. Of all types of damage, crop damage was the highest (51%), followed by house damage (31%) and damage to stored grains (18%), with winter being the season of greatest elephant damage. Of the 178 respondents, the majority (82%) were positive towards elephant conservation, despite the increase in HEC incidents perceived by 88% of respondents. Among the mitigation measures present, the most widely applied was the electric fence (91%), followed by the barbed wire fence (5%), reinforced cement concrete wall (3%), and gabion wall (1%); the most effective were the reinforced cement concrete wall and the gabion wall. To combat increasing crop damage, an insurance policy should be initiated. The efficiency of the mitigation measures should be monitored regularly, and corrective measures applied as needed.
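The chi-square test of damage against distance from the park boundary can be sketched as a goodness-of-fit comparison of observed counts against a uniform expectation; the damage counts and distance bands below are hypothetical, not the study's data:

```python
def chi_square(observed, expected):
    """Pearson chi-square statistic over paired observed/expected counts."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Hypothetical damage counts in three distance bands from the park boundary,
# nearest band first, against a uniform expectation over 125 incidents
observed = [62, 41, 22]
expected = [125 / 3] * 3
chi2 = chi_square(observed, expected)
```

With 2 degrees of freedom the critical value at α=0.05 is 5.99, so a statistic this large would indicate that damage depends significantly on distance from the boundary.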

Keywords: crop and property damage, elephant conflict, Asiatic wild elephant, mitigation measures

Procedia PDF Downloads 138
2296 Dual-use UAVs in Armed Conflicts: Opportunities and Risks for Cyber and Electronic Warfare

Authors: Piret Pernik

Abstract:

Based on strategic, operational, and technical analysis of the ongoing armed conflict in Ukraine, this paper examines the opportunities and risks of using small commercial drones (dual-use unmanned aerial vehicles, UAVs) for military purposes. It discusses the opportunities and risks in the information domain, encompassing both cyber and electromagnetic interference and attacks, and draws conclusions on the possible strategic impact of the widespread use of dual-use UAVs on battlefield outcomes in modern armed conflicts. The article contributes to filling a gap in the literature by examining cyberattacks and electromagnetic interference on the basis of empirical data. Today, more than one hundred states and non-state actors possess UAVs, ranging from low-cost commodity models, many of them dual-use and widely available and affordable to anyone, to high-cost combat UAVs (UCAVs) with lethal kinetic strike capabilities, which can be enhanced with Artificial Intelligence (AI) and Machine Learning (ML). Dual-use UAVs have been used by various actors for intelligence, reconnaissance, surveillance, situational awareness, geolocation, and kinetic targeting. They thus function as force multipliers, enabling kinetic and electronic warfare attacks, and provide comparative and asymmetric operational and tactical advantages. Some go as far as to argue that automated (or semi-automated) systems can change the character of warfare, while others observe that the use of small drones has not changed the balance of power or battlefield outcomes. UAVs offer considerable opportunities for commanders; for example, because they can be operated without GPS navigation, they are less vulnerable and less dependent on satellite communications. They can be, and have been, used to conduct cyberattacks, electromagnetic interference, and kinetic attacks; however, they are themselves highly vulnerable to such attacks.
So far, strategic studies, the literature, and expert commentary have overlooked the cybersecurity and electronic interference dimensions of the use of dual-use UAVs, and studies that link technical analysis of opportunities and risks with strategic battlefield outcomes are missing. The proliferation of dual-use commercial UAVs in armed and hybrid conflicts is expected to continue and accelerate. It is therefore important to understand the specific opportunities and risks related to the crowdsourced use of dual-use UAVs, which can have kinetic effects. Technical countermeasures to protect UAVs differ depending on the type of UAV (small, midsize, large, stealth combat), and this paper offers a unique analysis of small UAVs from the perspective of both the opportunities and the risks they present to commanders and other actors in armed conflict.

Keywords: dual-use technology, cyber attacks, electromagnetic warfare, case studies of cyberattacks in armed conflicts

Procedia PDF Downloads 87
2295 An Assessment of Housing Affordability and Safety Measures in the Varied Residential Area of Lagos, A Case Study of the Amuwo-Odofin Local Government Area in Lagos State

Authors: Jubril Olatunbosun Akinde

Abstract:

Unplanned population growth is mostly attributed to a lack of infrastructural facilities and poor economic conditions in rural dwellings and to the resulting rural-urban migration, which has caused a severe housing deficiency in urban centres and put pressure on housing delivery in the cities. Affordable housing encompasses not only cost but also the environmental factors that make living acceptable and comfortable, including good access routes, ventilation, sanitation, and access to other basic human needs such as water and safety. This research assessed housing affordability and safety measures in a varied residential area of Lagos by examining the demographic and socioeconomic attributes of residents, the existing residential safety measures, and residential quality in terms of safety, and by examining the relationship between housing affordability and safety in the varied residential areas. The research adopted the Bartlett, Kotrlik and Higgins (2001) t-test method to determine the sample size, which specifies different populations at different levels of significance (α). Primary data were sourced from a field survey in which respondents were selected by simple random sampling, giving every member of the population an equal chance of being selected; the sample size was two hundred (200) respondents, and the data were subjected to the necessary tests. The research concluded that housing safety and security are the responsibility of every resident and that landlords and landladies possess a better sense of security in their neighbourhood than renters; residents therefore need to be aware of their responsibility for ensuring the safety of lives and property.
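Sample-size procedures of the Bartlett, Kotrlik and Higgins kind build on Cochran's formula for proportions, n0 = t²·p·(1-p)/d². As a sketch only (the t, p, and d values below are the conventional illustrative choices, not the values the study used):

```python
def cochran_sample_size(t, p, d):
    """Cochran's sample-size formula for a proportion: n0 = t^2 * p * (1 - p) / d^2."""
    return (t ** 2) * p * (1 - p) / (d ** 2)

# 95% confidence (t = 1.96), maximum variability p = 0.5, 5% margin of error
n0 = cochran_sample_size(1.96, 0.5, 0.05)
```

The result (about 384) is then usually reduced by a finite-population correction when the population is known, which is how a smaller final sample such as 200 can be justified.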

Keywords: housing, housing affordability, housing security, residential, residential quality

Procedia PDF Downloads 96
2294 The Nexus between Child Marriage and Women Empowerment with Physical Violence in Two Culturally Distinct States of India

Authors: Jayakant Singh, Enu Anand

Abstract:

Background: Child marriage is widely prevalent in India. It is a form of gross human rights violation that forces a child bride into subservience to her husband within the marital relationship. We investigated the relationship between women's age at marriage and level of empowerment and the physical violence they experienced in the 12 months preceding the survey, among young married women aged 20-24 in two culturally distinct states of India, Bihar and Tamil Nadu. Methods: We used information collected from 10514 young married women (20-24 years) at the all-India level, 373 in Bihar and 523 in Tamil Nadu, from the third round of the National Family Health Survey. An empowerment index was calculated from parameters such as mobility, economic independence, and decision-making power using Principal Component Analysis (PCA). Bivariate analysis was performed primarily using the chi-square test of significance, and logistic regression was carried out to assess the effect of age at marriage and empowerment on physical violence. Results: A lower level of women's empowerment was significantly associated with physical violence in Tamil Nadu (OR=2.38, p<0.01), whereas child marriage (marriage before age 15) was associated with physical violence in Bihar (OR=3.27, p<0.001). The mean difference in age at marriage between those who experienced physical violence and those who did not was 7 months in Bihar and 10 months in Tamil Nadu. Conclusion: Culture-specific intervention may be key to reducing violence against women, as the results showed that different factors contribute to physical violence in Bihar and Tamil Nadu. Marrying at an appropriate age is perhaps protective against abuse because it equips a woman to assert her rights effectively. This calls for urgent action to curb both violence and child marriage, with stricter involvement of the family, civil society, and the government.
In the meantime, physical violence should be recognized as a public health problem, and appropriate treatment for victims should be integrated into health care institutions.
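A PCA-based index of the kind described above scores each woman on the first principal component of her standardized indicators. A minimal stdlib-only sketch using power iteration (the binary indicator rows below are hypothetical, not survey data):

```python
from math import sqrt

def first_principal_component(data, iters=200):
    """First PC of standardized data via power iteration on the covariance matrix.
    Returns (loadings, per-row index scores)."""
    n, k = len(data), len(data[0])
    means = [sum(row[j] for row in data) / n for j in range(k)]
    sds = [sqrt(sum((row[j] - means[j]) ** 2 for row in data) / n) for j in range(k)]
    z = [[(row[j] - means[j]) / sds[j] for j in range(k)] for row in data]
    cov = [[sum(zi[a] * zi[b] for zi in z) / n for b in range(k)] for a in range(k)]
    v = [1.0] * k
    for _ in range(iters):
        w = [sum(cov[a][b] * v[b] for b in range(k)) for a in range(k)]
        norm = sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    return v, [sum(zi[j] * v[j] for j in range(k)) for zi in z]

# Hypothetical 0/1 indicators: mobility, economic independence, decision-making
rows = [[1, 1, 1], [0, 0, 0], [1, 0, 1], [0, 1, 0], [1, 1, 0], [0, 0, 1]]
loadings, index = first_principal_component(rows)
```

The resulting scores are centered around zero; women would then typically be grouped into low/medium/high empowerment terciles for the bivariate and regression analyses.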

Keywords: child marriage, empowerment, India, physical violence

Procedia PDF Downloads 297
2293 Characteristics of Middle Grade Students' Solution Strategies While Reasoning the Correctness of the Statements Related to Numbers

Authors: Ayşegül Çabuk, Mine Işıksal

Abstract:

Mathematics is a sense-making activity, so it requires meaningful learning. Based on this idea, meaningful mathematical connections are necessary to learn mathematics, and the major question becomes which educational methods create opportunities to make such connections and to understand mathematics. The amalgam of reasoning and proof is one method that creates opportunities to learn mathematics in a meaningful way. However, even though reasoning and proof should be included from prekindergarten to grade 12, studies in the literature generally involve secondary school students and pre-service mathematics teachers. In light of the idea that the amalgam of reasoning and proof has a significant effect on middle school students' mathematical learning, this study investigates middle grade students' tendencies when reasoning about the correctness of statements related to numbers. The sample included 272 middle grade students: 69 sixth grade students (25.4%), 101 seventh grade students (37.1%), and 102 eighth grade students (37.5%). Data were gathered through an achievement test that included two essay-type problems about algebra. The answers to these two items were analyzed both quantitatively and qualitatively in terms of the students' solution strategies when reasoning about the correctness of the statements. In line with findings in the literature, most of the students at all grade levels used numerical examples to judge the statements. The results also showed that the majority of these students appear to believe that providing one or more selected examples is sufficient to show the correctness of a statement. Hence, although students have proving and reasoning abilities even at earlier ages, their reasoning is generally based on empirical evidence.
It is therefore suggested that examples and example-based reasoning can play a fundamental role in generating systematic reasoning and proof insight at earlier ages.

Keywords: reasoning, mathematics learning, middle grade students

Procedia PDF Downloads 410
2292 Synthesis of PVA/γ-Fe2O3 Used in Cancer Treatment by Hyperthermia

Authors: Sajjad Seifi Mofarah, S. K. Sadrnezhaad, Shokooh Moghadam, Javad Tavakoli

Abstract:

In recent years, a new method of combination treatment for cancer has been developed and studied that has led to significant advancements in the field of cancer therapy. Hyperthermia is a traditional therapy that destroys cancer cells by creating a medically approved level of heat with the help of an alternating (AC) magnetic field. This paper details the production of spherical PVA/γ-Fe2O3 nanocomposites intended for medical purposes such as tumor treatment by hyperthermia. To reach a suitable and evenly distributed temperature, nanocomposites with core-shell morphology and a spherical form, 100 to 200 nm in size, were created by phase-separation emulsion, in which magnetic γ-Fe2O3 nanoparticles with an average particle size of 20 nm, at different percentages (0.2, 0.4, 0.5, and 0.6), were covered with polyvinyl alcohol. The main concern in hyperthermia and heat treatment is achieving a desirable specific absorption rate (SAR), and one of the most critical factors in SAR is particle size; in this project, every attempt was made to reach a minimal size and consequently a maximum SAR. The morphology of the spherical PVA/γ-Fe2O3 nanocomposite was analyzed by SEM, and the chemical bonds created were studied by FTIR. To investigate the particle size distribution of the magnetic nanocomposite, a DLS experiment was conducted; moreover, to determine the magnetic behavior of the γ-Fe2O3 particles and the PVA/γ-Fe2O3 nanocomposite at different concentrations, a VSM test was conducted.
To sum up, creating magnetic nanocomposites with a spherical morphology for drug loading opens the door to nanocomposites that provide efficient heating and a controlled release of drug simultaneously inside a magnetic field, characteristics that could significantly improve patient recovery.
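The SAR mentioned above is conventionally estimated calorimetrically as SAR = c_p·(dT/dt)/m_Fe, the sample heat capacity times the initial heating-rate slope divided by the iron mass fraction. This is a standard relation, not one the paper states, and the values below are hypothetical:

```python
def specific_absorption_rate(c_p, dT_dt, fe_mass_fraction):
    """Calorimetric SAR estimate in W/g Fe: c_p * (dT/dt) / m_Fe."""
    return c_p * dT_dt / fe_mass_fraction

# Hypothetical run: water-like heat capacity 4.18 J/(g K), initial heating
# slope 0.02 K/s, 5 mg of Fe per gram of suspension
sar = specific_absorption_rate(4.18, 0.02, 0.005)
```

Because the heating slope depends on particle size and field parameters, minimizing particle size while keeping the magnetic moment high is what drives the maximum-SAR goal stated in the abstract.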

Keywords: nanocomposite, hyperthermia, cancer therapy, drug release

Procedia PDF Downloads 288
2291 Myomectomy and Blood Loss: A Quality Improvement Project

Authors: Ena Arora, Rong Fan, Aleksandr Fuks, Kolawole Felix Akinnawonu

Abstract:

Introduction: Leiomyomas are benign tumors derived from the overgrowth of uterine smooth muscle cells. For women with symptomatic leiomyomas who desire future fertility, myomectomy is the standard surgical treatment. Perioperative hemorrhage is a common complication of myomectomy. We performed this study to investigate the blood transfusion rate in abdominal myomectomies, the risk factors influencing blood loss, and modalities to reduce perioperative blood loss. Methods: A retrospective chart review was done for patients who underwent myomectomy from 2016 to 2022 at Queens Hospital Center, New York. We examined preoperative patient demographics, clinical characteristics, intraoperative variables, and postoperative outcomes. The Mann-Whitney U test was used for non-parametric comparisons of continuous variables. Results: A total of 159 myomectomies were performed between 2016 and 2022, including 1 laparoscopic, 65 vaginal, and 93 abdominal. Forty-four patients received a blood transfusion during or within 72 hours of abdominal myomectomy, a transfusion rate of 47.3%, about twice the average rate of 20% documented in the literature. Risk factors identified were black race, preoperative hematocrit <30%, preoperative blood transfusion within 72 hours, large fibroid burden, prolonged surgical time, and the abdominal approach. Conclusion: Preoperative optimization with iron supplements or GnRH agonists is important for patients undergoing myomectomy. Interventions to decrease intraoperative blood loss should include cell salvage, tourniquet, vasopressin, misoprostol, tranexamic acid, and gelatin-thrombin matrix hemostatic sealant.

Keywords: myomectomy, perioperative blood loss, cell saver, tranexamic acid

Procedia PDF Downloads 69
2290 The Inverse Problem in Energy Beam Processes Using Discrete Adjoint Optimization

Authors: Aitor Bilbao, Dragos Axinte, John Billingham

Abstract:

The inverse problem in Energy Beam (EB) Processes consists of defining the control parameters, in particular the 2D beam path (position and orientation of the beam as a function of time), to arrive at a prescribed solution (freeform surface). This inverse problem is well understood for conventional machining, because the cutting tool geometry is well defined and the material removal is a time independent process. In contrast, EB machining is achieved through the local interaction of a beam of particular characteristics (e.g. energy distribution), which leads to a surface-dependent removal rate. Furthermore, EB machining is a time-dependent process in which not only the beam varies with the dwell time, but any acceleration/deceleration of the machine/beam delivery system, when performing raster paths will influence the actual geometry of the surface to be generated. Two different EB processes, Abrasive Water Machining (AWJM) and Pulsed Laser Ablation (PLA), are studied. Even though they are considered as independent different technologies, both can be described as time-dependent processes. AWJM can be considered as a continuous process and the etched material depends on the feed speed of the jet at each instant during the process. On the other hand, PLA processes are usually defined as discrete systems and the total removed material is calculated by the summation of the different pulses shot during the process. The overlapping of these shots depends on the feed speed and the frequency between two consecutive shots. However, if the feed speed is sufficiently slow compared with the frequency, then consecutive shots are close enough and the behaviour can be similar to a continuous process. Using this approximation a generic continuous model can be described for both processes. 
The inverse problem is usually solved for this kind of process by simply setting the dwell time in proportion to the required depth of milling at each pixel on the surface, i.e. by using a linear model of the process. However, this approach does not always lead to a good solution, since linear models are valid only when shallow surfaces are etched. The solution of the inverse problem is improved by using a discrete adjoint optimization algorithm. Moreover, the calculation of the Jacobian matrix consumes less computation time than finite difference approaches. The influence of the machine dynamics on the actual movement of the jet is also important and should be taken into account. When the parameters of the controller are not known or cannot be changed, a simple approximation is used for the choice of the slope of a step profile. Several experimental tests are performed for both technologies to show the usefulness of this approach.
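The contrast between the linear dwell-time rule and adjoint-based optimization can be illustrated with a minimal sketch. It assumes a 1D linear model in which the etched depth is the convolution of the dwell-time profile with a fixed Gaussian beam footprint; this toy model and all numbers in it are illustrative assumptions, not the authors' actual process model. The key mechanic it shows is that the adjoint of the forward operator yields the full gradient of the misfit in a single pass, instead of one perturbed forward solve per pixel as finite differences would require:

```python
import numpy as np

# 1D illustration: etched depth modelled as the convolution of the dwell-time
# profile with a fixed Gaussian beam footprint (shallow-etch, linear regime).
x = np.linspace(-1.0, 1.0, 200)
kernel = np.exp(-np.linspace(-3.0, 3.0, 41) ** 2)   # beam energy footprint
kernel /= kernel.sum()

target = np.where(np.abs(x) < 0.4, 1.0, 0.2)        # prescribed depth profile

def forward(dwell):
    """Linear forward model: depth = footprint * dwell (convolution)."""
    return np.convolve(dwell, kernel, mode="same")

dwell = np.zeros_like(target)
step = 1.0
for _ in range(500):
    residual = forward(dwell) - target
    # The adjoint of convolution is correlation: one pass gives the exact
    # gradient of 0.5*||forward(dwell) - target||^2 w.r.t. every dwell time.
    grad = np.convolve(residual, kernel[::-1], mode="same")
    dwell -= step * grad
    dwell = np.maximum(dwell, 0.0)                  # dwell times are non-negative

final_error = float(np.abs(forward(dwell) - target).mean())
```

In practice the removal rate is surface-dependent and nonlinear, which is exactly why the paper moves beyond the proportional dwell-time rule; the sketch only demonstrates why an adjoint gradient is cheaper than finite differencing each control parameter.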

Keywords: abrasive waterjet machining, energy beam processes, inverse problem, pulsed laser ablation

Procedia PDF Downloads 267
2289 Relationship among Teams' Information Processing Capacity and Performance in Information System Projects: The Effects of Uncertainty and Equivocality

Authors: Ouafa Sakka, Henri Barki, Louise Cote

Abstract:

Uncertainty and equivocality are defined in the information processing literature as two task characteristics that require different information processing responses from managers. As uncertainty often stems from a lack of information, addressing it is thought to require the collection of additional data. On the other hand, as equivocality stems from ambiguity and a lack of understanding of the task at hand, addressing it is thought to require rich communication among those involved. Past research has provided only weak to moderate empirical support for these hypotheses. The present study contributes to this literature by defining uncertainty and equivocality at the project level and investigating their moderating effects on the association between several project information processing constructs and project performance. The information processing constructs considered are the amount of information collected by the project team, and the richness and frequency of formal communications among team members to discuss the project's follow-up reports. Data on 93 information system development (ISD) project managers were collected in a questionnaire survey and analyzed via the Fisher test for correlation differences. The results indicate that the highest project performance levels were observed in projects characterized by high uncertainty and low equivocality, in which project managers were provided with detailed and updated information on project costs and schedules. In addition, our findings show that information about user needs and technical aspects of the project is less useful for managing projects where uncertainty and equivocality are high. Further, while the strongest positive effect of interactive use of follow-up reports on performance occurred in projects where both uncertainty and equivocality levels were high, its weakest effect occurred when both were low.
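The Fisher test for correlation differences used here compares correlation coefficients across independent subgroups of projects via the r-to-z transform. A minimal sketch of that test follows; the correlations and subgroup sizes in the example call are illustrative values, not the study's data:

```python
import math

def fisher_z_test(r1, n1, r2, n2):
    """Two-tailed Fisher test for the difference between two independent
    correlation coefficients r1 (sample size n1) and r2 (sample size n2)."""
    z1, z2 = math.atanh(r1), math.atanh(r2)              # Fisher r-to-z transform
    se = math.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))      # std. error of z1 - z2
    z = (z1 - z2) / se
    # two-tailed p-value from the standard normal distribution
    p = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))
    return z, p

# e.g. r = .60 in one hypothetical subgroup of 45 projects vs. r = .20 in
# another of 48 projects:
z, p = fisher_z_test(0.60, 45, 0.20, 48)                 # z ≈ 2.29, p ≈ 0.022
```

A significant z indicates that the information-performance correlation genuinely differs between subgroups, e.g. between high- and low-uncertainty projects.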

Keywords: uncertainty, equivocality, information processing model, management control systems, project control, interactive use, diagnostic use, information system development

Procedia PDF Downloads 272
2288 In vitro Antioxidant and Antisickling Effects of Aerva javanica and Ficus palmata Extracts on Sickle Cell Anemia

Authors: E. A. Alaswad, H. M. Choudhry, F. Z. Filimban

Abstract:

Sickle Cell Anemia (SCA) is an autosomal recessive blood disorder. The sickle-shaped red blood cells are the main cause of many problems in the blood vessels and capillaries. Aerva javanica (J) and Ficus palmata (P) are medicinal plants with many traditional uses whose efficacy has been reported. The aim of this study was to assess the antioxidant activity and the antisickling effect of J and P extracts. For this study, air-dried leaves of J and P plants were ground, and the active components were extracted by maceration in water (W) and methanol (M) as solvents. The antioxidant activity of JW, PW, JM, and PM was assessed by the radical scavenging method using 2,2-diphenyl-1-picrylhydrazyl (DPPH). To determine the antisickling effect of J and P extracts, 20 samples were collected from sickle cell anemia patients. Different concentrations of J and P extracts (200 and 110 μg/mL) were added to the samples and incubated. A drop of each sample was examined under a light microscope; normal and sickled RBCs were counted and expressed as the percentage of sickling. The stabilizing effect of the extracts was measured by the erythrocyte osmotic fragility test. As estimated by the DPPH method, all the extracts showed antioxidant activity, with significant inhibition of the DPPH radicals. PM had the lowest IC50 (71.49 μg/mL) while JM had the highest (408.49 μg/mL). Sickle cells treated with the extracts at different concentrations showed a significantly reduced percentage of sickling compared to control samples. JM at 200 μg/mL gave the strongest antisickling effect, with 17.4% sickling compared to 67.5% in controls, while PM at 200 μg/mL showed the highest membrane stability. In conclusion, the results show that J and P extracts have antisickling effects. Therefore, Aerva javanica and Ficus palmata may have a role in SCA management and a positive impact on patients' lives.
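The IC50 values reported for the DPPH assay are the extract concentrations that inhibit 50% of the radical signal. One common way to estimate IC50 from a small set of assay readings is linear interpolation on a log-concentration scale; the sketch below uses made-up dose-response values for illustration, not the study's measurements:

```python
import numpy as np

# Hypothetical dose-response data: extract concentration (ug/mL) vs. measured
# % inhibition of the DPPH radical signal (illustrative values only).
conc = np.array([25.0, 50.0, 100.0, 200.0, 400.0])
inhibition = np.array([18.0, 31.0, 47.0, 66.0, 81.0])

def estimate_ic50(conc, inhibition):
    """Estimate IC50 by interpolating the log10-concentration at which the
    inhibition curve crosses 50%. Assumes inhibition increases with dose."""
    log_c = np.log10(conc)
    return float(10.0 ** np.interp(50.0, inhibition, log_c))

ic50 = estimate_ic50(conc, inhibition)   # between 100 and 200 ug/mL here
```

With more data points, a nonlinear fit (e.g. a four-parameter logistic curve) is usually preferred over interpolation, but the bracketing idea is the same.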

Keywords: Aerva javanica, antioxidant, antisickling, Ficus palmata, sickle cell anemia

Procedia PDF Downloads 146