Search results for: facility performance evaluation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 17968

2518 Critically Sampled Hybrid Trigonometry Generalized Discrete Fourier Transform for Multistandard Receiver Platform

Authors: Temidayo Otunniyi

Abstract:

This paper presents a low-complexity channelization algorithm for multi-standard receiver platforms using a polyphase implementation of a critically sampled hybrid trigonometry generalized discrete Fourier transform (HGDFT). The HGDFT channelization algorithm exploits the orthogonality of two trigonometric Fourier functions, together with the properties of the quadrature mirror filter bank (QMFB) and the exponential modulated filter bank (EMFB), respectively. HGDFT shows improvements in implementation in terms of high reconfigurability, shorter filter length, parallelism, and moderate computational load. Type I and Type III polyphase structures are derived for real-valued HGDFT modulation. The design specifications are evaluated at both critically sampled and oversampled rates for single- and multi-standard receiver platforms. For oversampled single-standard receiver channels, the HGDFT algorithm achieved a 40% complexity reduction, compared to 34% and 38% reductions for the discrete Fourier transform (DFT) and tree quadrature mirror filter (TQMF) algorithms. In oversampled multi-standard mode, the parallel generalized discrete Fourier transform (PGDFT) and recombined generalized discrete Fourier transform (RGDFT) achieved a 41% complexity reduction, while HGDFT achieved 46%. For critically sampled multi-standard receiver channels, HGDFT achieved a 70% complexity reduction, while both PGDFT and RGDFT achieved 34%.
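As an illustration of the polyphase channelization idea the abstract builds on (a plain critically sampled DFT filter bank, not the HGDFT itself), a minimal sketch follows; the prototype filter is a placeholder and commutator-ordering details of a production channelizer are deliberately simplified:

```python
import numpy as np

def dft_polyphase_channelizer(x, h, M):
    """Critically sampled DFT filter-bank channelizer (illustrative sketch).

    x: 1-D input signal; h: prototype low-pass FIR filter; M: number of
    channels, which is also the decimation factor.
    Returns an (M, len(x)//M) array of channel outputs.
    """
    L = -(-len(h) // M) * M                    # round filter length up to a multiple of M
    h = np.concatenate([h, np.zeros(L - len(h))])
    N = len(x) // M * M                        # trim the signal likewise
    x = x[:N]
    # Polyphase decomposition: branch m filters every M-th input sample
    # with every M-th prototype tap, so each branch runs at the low rate.
    branches = np.stack(
        [np.convolve(x[m::M], h[m::M])[: N // M] for m in range(M)]
    )
    # An M-point DFT across the branches recombines them into M channels.
    return np.fft.fft(branches, axis=0)
```

With a trivial averaging prototype, a DC input lands entirely in channel 0 at the decimated rate, which is a quick sanity check on the structure.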

Keywords: software defined radio, channelization, critical sample rate, over-sample rate

Procedia PDF Downloads 114
2517 On the Effect of Carbon on the Efficiency of Titanium as a Hydrogen Storage Material

Authors: Ghazi R. Reda Mahmoud Reda

Abstract:

Among the hydride-forming metals, Mg and Ti are the most lightweight; however, they are covered with a passive layer of oxides and hydroxides and require activation treatment under high temperature (> 300 °C) and hydrogen pressure (> 3 MPa) before being used for storage and transport applications. It is well known that a small graphite addition to Ti or Mg leads to a dramatic change in the kinetics of mechanically induced hydrogen sorption (uptake) and significantly stimulates the Ti-hydrogen interaction. Different authors have offered many explanations for the effect of graphite addition on the performance of Ti as a hydrogen storage material. Not only graphite but also the addition of a polycyclic aromatic compound will improve the hydrogen absorption kinetics. It will be shown that the function of the carbon addition is twofold. First, carbon acts as a vacuum cleaner, scavenging out the interstitial oxygen that can poison or slow down hydrogen absorption. It is also important to note that oxygen favors the chemisorption of hydrogen, which is not desirable for hydrogen storage. Second, during scavenging of the interstitial oxygen, the carbon reacts with oxygen in the nano- and microchannels through a highly exothermic reaction to produce carbon dioxide and monoxide, which provide the necessary heat for activation; thus, in the presence of carbon, a lower heat of activation for hydrogen absorption is observed experimentally. Furthermore, the reaction of hydrogen with the carbon oxides produces water, which, due to ball milling, hydrolyzes to produce the linear H₅O₂⁺ cation. This reconstructs the primary structure of the nanocarbon into a secondary structure, in which the primary structures (sheets of carbon) are connected through hydrogen bonding. It is the space between these sheets where physisorption or defect-mediated sorption occurs.

Keywords: metal forming hydrides, polar molecule impurities, titanium, phase diagram, hydrogen absorption

Procedia PDF Downloads 344
2516 Chinese Speakers’ Language Attitudes Towards English Accents: Comparing Mainland and Hong Kong English Major Students’ Accent Preferences in ELF Communication

Authors: Jiaqi XU, Qingru Sun

Abstract:

Accent plays a crucial role in second language (L2) learners' performance in schooling contexts in the era of globalization, where English is adopted as a lingua franca (ELF). Previous studies found that Chinese mainland students prefer American English accents, whereas the younger generation in Hong Kong prefers British accents. However, these studies neglect non-native accents of English and do little to explain why L2 learners in the two regions differ in their accent preferences. This research therefore aims to bridge the gap by 1) including both native and non-native varieties of English accents: American, British, Chinese Mandarin English, and Hong Kong English; and 2) uncovering and comparing the deeper reasons behind the similar and/or different accent preferences of Chinese mainland and Hong Kong speakers. A questionnaire combining objective and subjective questions was designed to investigate the students' accent inclinations and the attitudes and reasons behind their linguistic choices. The questionnaire was distributed to eight participants (4 Chinese mainland students and 4 Hong Kong students), all postgraduate students at a Hong Kong university. Based on the collected data, this research identifies one similarity and two differences between the Chinese mainland and Hong Kong students' attitudes. The theories of identity construction and standard language ideology are then applied to analyze the reasons behind these similarities and differences and to evaluate how language attitudes intertwine with identity construction and language ideology.

Keywords: accent, language attitudes, identity construction, language ideology, ELF communication

Procedia PDF Downloads 146
2515 The Influence of Training and Competition on Cortisol Levels and Sleep in Elite Female Athletes

Authors: Shannon O’Donnell, Matthew Driller, Gregory Jacobson, Steve Bird

Abstract:

Stress hormone levels in competition vs. training settings have yet to be evaluated in elite female athletes, as has the effect of this stress on subsequent sleep quality and quantity. The aim of the current study was to evaluate different psychophysiological stress markers in competition and training environments and their subsequent effect on sleep indices in an elite female athlete population. The study involved 10 elite female netball athletes (mean ± SD; age = 23 ± 6 yr) providing multiple salivary hormone measures and having their sleep monitored on two occasions: a match day and a training day. The training session and match were performed at the same time of day and were matched for intensity and duration. Saliva samples were collected immediately pre-session (5:00 pm), immediately post-session (7:15 pm), and at 10:00 pm, and were analysed for cortisol concentrations. Sleep monitoring was performed using wrist actigraphy to assess total sleep time (TST), sleep efficiency (SE%), and sleep latency (SL). Cortisol levels were significantly higher (p < 0.01) immediately post-match vs. post-training (mean ± SD; 0.925 ± 0.341 μg/dL and 0.239 ± 0.284 μg/dL, respectively) and at 10:00 pm (0.143 ± 0.085 μg/dL and 0.072 ± 0.064 μg/dL, respectively, p < 0.01). The difference between trials was associated with a very large effect (ES: 2.23) immediately post-session (7:15 pm) and a large effect (ES: 1.02) at 10:00 pm. There was a significant reduction in TST (mean ± SD; -117.9 ± 111.9 minutes, p < 0.01, ES: -1.89) and SE% (-7.7 ± 8.5%, p < 0.02, ES: -0.79) on the night following the netball match compared to the training session. Although not significant (p > 0.05), there was an increase in SL following the netball match vs. the training session (67.0 ± 51.9 minutes and 38.5 ± 29.3 minutes, respectively), which was associated with a moderate effect (ES: 0.80). The current study reports that cortisol levels are significantly higher, and subsequent sleep quantity and quality significantly reduced, in elite female athletes following a match compared to a training session.
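The reported effect sizes (ES) appear to be standardized mean differences; a minimal sketch of Cohen's d with a pooled standard deviation, applied to the post-session cortisol values above, gives a value close to the reported 2.23 (the small gap likely reflects a different SD estimator, which the abstract does not specify):

```python
import math

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Standardized mean difference using the pooled standard deviation."""
    pooled_var = ((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2) / (n1 + n2 - 2)
    return (mean1 - mean2) / math.sqrt(pooled_var)

# Post-match vs. post-training cortisol from the abstract (n = 10 per trial)
d = cohens_d(0.925, 0.341, 10, 0.239, 0.284, 10)   # ~2.19, near the reported 2.23
```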

Keywords: cortisol, netball, performance, recovery

Procedia PDF Downloads 243
2514 Probabilistic Study of Impact Threat to Civil Aircraft and Realistic Impact Energy

Authors: Ye Zhang, Chuanjun Liu

Abstract:

In-service aircraft are exposed to different types of threats, e.g., bird strike, ground vehicle impact, runway debris, or even lightning strike. To satisfy the aircraft damage tolerance design requirements, the designer has to understand the threat level for different types of aircraft structures, whether metallic or composite. For composite structures, exposure to low-velocity impacts may produce serious internal damage, such as delaminations and matrix cracks, without leaving a visible mark on the impacted surface. This internal damage can cause a significant reduction in the load-carrying capacity of structures. The semi-probabilistic method provides a practical approximation for establishing the impact-threat-based energy cut-off level for the damage tolerance evaluation of aircraft components. Thus, the probabilistic distribution of the impact threat and the realistic impact energy cut-off levels are the essential prerequisites for the certification of aircraft composite structures. A new survey of impact threats to in-service civil aircraft has recently been carried out based on field records covering around 500 civil aircraft (mainly single-aisle) and more than 4.8 million flight hours. In total, 1,006 damages caused by low-velocity impact events were screened out from more than 8,000 records including impact dents, scratches, corrosion, delaminations, cracks, etc. The dependency of the impact threat on the location of the aircraft structures and on the structural configuration was analyzed. Although the survey mainly focused on metallic structures, the resulting low-energy impact data are believed to be representative of civil aircraft in general, since the service environments and maintenance operations are independent of the structural materials.
The probability of impact damage occurrence (Po) and of impact energy exceedance (Pe) are the two key parameters describing the statistical distribution of the impact threat. From the impact damage events in the survey, Po can be estimated as 2.1×10⁻⁴ per flight hour. For the calculation of Pe, a numerical model was developed using the commercial FEA software ABAQUS to back-estimate the impact energy from the visible damage characteristics. The relationship between visible dent depth and impact energy was established and validated by drop-weight impact experiments. Based on the survey results, Pe was calculated and assumed to follow a log-linear relationship versus impact energy. For the product of the two aforementioned probabilities, it is reasonable and conservative to assume Pa = Po × Pe = 10⁻⁵, which makes low-velocity impact events about as likely as Limit Load events. Combining Pa with the two probabilities Po and Pe obtained from the field survey, the cut-off level of realistic impact energy was estimated to be 34 J. In summary, a new survey of civil aircraft field records was recently conducted to investigate the probabilistic distribution of the impact threat. Based on the data, the two probabilities Po and Pe were obtained. Under a conservative assumption for Pa, the cut-off level for the realistic impact energy was determined, which is potentially applicable in the damage tolerance certification of future civil aircraft.
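The cut-off logic can be sketched numerically: assume log10(Pe) is linear in energy (the stated log-linear assumption), fix Pa = Po × Pe = 10⁻⁵, solve for Pe at the cutoff, and invert the fit. The slope and intercept below are illustrative placeholders chosen only so the result lands near the reported 34 J; they are not the paper's fitted coefficients:

```python
import math

# Assumed log-linear exceedance model: log10(Pe) = a - b * E
a, b = 0.0, 0.0389   # illustrative placeholders, not the paper's fit

Po = 2.1e-4          # probability of impact damage occurrence per flight hour
Pa = 1e-5            # assumed combined probability (Limit Load likelihood)

Pe_cutoff = Pa / Po                          # required exceedance probability
E_cutoff = (a - math.log10(Pe_cutoff)) / b   # invert the log-linear fit (joules)
```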

Keywords: composite structure, damage tolerance, impact threat, probabilistic

Procedia PDF Downloads 295
2513 Prediction of Endotracheal Tube Size in Children by Predicting Subglottic Diameter Using Ultrasonographic Measurement versus Traditional Formulas

Authors: Parul Jindal, Shubhi Singh, Priya Ramakrishnan, Shailender Raghuvanshi

Abstract:

Background: Knowledge of the influence of a child's age on laryngeal dimensions is essential for all practitioners dealing with the paediatric airway. Choosing the correct endotracheal tube (ETT) size is a crucial step in pediatric patients, because an oversized tube may cause complications like post-extubation stridor and subglottic stenosis. On the other hand, with a smaller tube there will be increased gas-flow resistance, aspiration risk, poor ventilation, and inaccurate monitoring of end-tidal gases, and reintubation with a different tube size may be required. Recent advances in ultrasonography (USG) techniques now allow accurate and descriptive evaluation of the pediatric airway. Aims and objectives: This study was planned to determine the accuracy of USG in assessing the appropriate ETT size and to compare it with formulae based on physical indices. Methods: After approval from the Institute's Ethics and Research Committee and written informed parental consent, the study was conducted on 100 subjects of either sex between 12-60 months of age, undergoing various elective surgeries under general anesthesia requiring endotracheal intubation. The same experienced radiologist performed all ultrasonography, measuring the transverse diameter at the level of the cricoid cartilage. After USG, general anesthesia was administered using the institute's standard techniques. An experienced anesthesiologist, unaware of the ultrasonography findings, performed the endotracheal intubations with an uncuffed endotracheal tube (Portex Tracheal Tube, Smiths Medical India Pvt. Ltd.) with a Murphy eye. The tracheal tube was considered best fit if the air leak was satisfactory at 15-20 cm H₂O of airway pressure. The best-fit values were compared with the endotracheal tube sizes calculated by ultrasonography, by various age-, height-, and weight-based formulas, and by the diameter of the right and left little fingers.
The correlations between tube sizes predicted by the different modalities were assessed using Pearson's correlation coefficient. The mean endotracheal tube sizes by ultrasonography and by traditional formulas were compared using Friedman's test and the Wilcoxon signed-rank test. Results: The predicted tube size equalled the best fit most often with ultrasonography (100%), followed by comparison to the left little finger (98%), the right little finger (97%), the age-based formula (95%), the multivariate formula (83%), and the body-length formula (81%). By Pearson's correlation, the best-fit endotracheal tube size showed a moderate correlation with the sizes from the age-based formula (r=0.743), body-length-based formula (r=0.683), right-little-finger-based formula (r=0.587), left-little-finger-based formula (r=0.587), and multivariate formula (r=0.741), and a strong correlation with ultrasonography (r=0.943). Ultrasonography was the most sensitive (100%) method of prediction, followed by comparison to the left (98%) and right (97%) little finger and the age-based formula (95%); the multivariate formula had lower sensitivity (83%), whereas the body-length-based formula was the least sensitive, at 78%. Conclusion: USG is a reliable method for estimating the subglottic diameter and predicting ETT size in children.
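A minimal sketch of the Pearson correlation used to rank the methods, applied to hypothetical toy data (the sizes below are illustrative, not the study's measurements; the age-based rule shown is the classic uncuffed-tube formula ID = age/4 + 4):

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc)))

# Hypothetical toy data: predicted vs. assumed "best fit" tube sizes (mm ID)
ages_yr = np.array([1, 2, 3, 4, 5])
age_formula = ages_yr / 4 + 4                      # ID = age/4 + 4
best_fit = np.array([4.0, 4.5, 5.0, 5.0, 5.5])     # assumed best-fit sizes
r = pearson_r(age_formula, best_fit)
```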

Keywords: endotracheal intubation, pediatric airway, subglottic diameter, traditional formulas, ultrasonography

Procedia PDF Downloads 225
2512 Cirrhosis Mortality Prediction as Classification using Frequent Subgraph Mining

Authors: Abdolghani Ebrahimi, Diego Klabjan, Chenxi Ge, Daniela Ladner, Parker Stride

Abstract:

In this work, we use machine learning and novel data analysis techniques to predict the one-year mortality of cirrhotic patients. Data from 2,322 patients with liver cirrhosis were collected at a single medical center. Different machine learning models were applied to predict one-year mortality, drawing on a comprehensive feature space including demographic information, comorbidities, clinical procedures, and laboratory tests. A temporal pattern mining technique called Frequent Subgraph Mining (FSM) was used, with the Model for End-stage Liver Disease (MELD) mortality prediction as a comparator. All of our models statistically significantly outperform the MELD-score model, showing an average 10% improvement in the area under the curve (AUC). The FSM technique by itself does not improve the model significantly, but FSM together with an ensemble machine learning technique further improves model performance. With the abundance of data available in healthcare through electronic health records (EHR), existing predictive models can be refined to identify and treat patients at risk of higher mortality. However, due to the sparsity of the temporal information needed by FSM, the FSM model does not yield significant improvements. To the best of our knowledge, this is the first work to apply modern machine learning algorithms and data analysis methods to predicting the one-year mortality of cirrhotic patients, and it builds a model that predicts one-year mortality significantly more accurately than the MELD score. We have also tested the potential of FSM and provided a new perspective on the importance of clinical features.
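A minimal sketch of the AUC comparison underlying the reported improvement: the area under the ROC curve via the rank-sum (Mann-Whitney) identity, checking that a better-separating model scores higher than a baseline. The scores below are synthetic, not the study's data:

```python
import numpy as np

def auc(scores, labels):
    """Area under the ROC curve via the rank-sum identity: the probability
    that a random positive outranks a random negative, ties counted 1/2."""
    scores, labels = np.asarray(scores, float), np.asarray(labels, int)
    pos, neg = scores[labels == 1], scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# Synthetic scores: the "model" separates the classes fully,
# the "baseline" only partially.
labels   = np.array([0, 0, 0, 0, 1, 1, 1, 1])
baseline = np.array([0.2, 0.5, 0.4, 0.6, 0.4, 0.6, 0.5, 0.9])
model    = np.array([0.1, 0.3, 0.2, 0.4, 0.5, 0.8, 0.6, 0.9])
```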

Keywords: machine learning, liver cirrhosis, subgraph mining, supervised learning

Procedia PDF Downloads 120
2511 Integrated Two Stage Processing of Biomass Conversion to Hydroxymethylfurfural Esters Using Ionic Liquid as Green Solvent and Catalyst: Synthesis of Mono Esters

Authors: Komal Kumar, Sreedevi Upadhyayula

Abstract:

In this study, a two-stage process was established for the synthesis of HMF esters using ionic liquid acid catalysts. Ionic liquid catalysts with different Brønsted acidity strengths were prepared in the laboratory and characterized using ¹H NMR, FT-IR, and ¹³C NMR spectroscopy. A solid acid catalyst was prepared from the ionic liquid catalyst using the immobilization method. The acidity of the synthesized catalysts was measured using the Hammett function and titration methods. Catalytic performance was evaluated for biomass conversion to 5-hydroxymethylfurfural (5-HMF) and levulinic acid (LA) in a methyl isobutyl ketone (MIBK)-water biphasic system. Good yields of 5-HMF and LA were found at different MIBK:water compositions; at an MIBK:water ratio of 10:1, a good yield of 5-HMF was observed at a temperature of 150 °C. Upgrading of 5-HMF into monoesters was performed by reacting 5-HMF with biomass-derived monoacids. The ionic liquid catalyst with the -SO₃H functional group was found to be more efficient than the solid acid catalyst for both the esterification reaction and biomass conversion. A good yield of 5-HMF esters with high 5-HMF conversion was obtained at 105 °C using the most active catalyst. In this process, stage A was the hydrothermal conversion of cellulose and monomer into 5-HMF and LA using the acid catalyst, and stage B was the subsequent esterification using a similar acid catalyst. All the 5-HMF monoesters synthesized here can be used in the chemical and pharmaceutical industries and as cross-linkers for adhesives or coatings. A density functional theory (DFT) study was performed using the Gaussian 09 program to optimize the ionic liquid structure and find the minimum-energy configuration of the catalyst.

Keywords: biomass conversion, 5-HMF, Ionic liquid, HMF ester

Procedia PDF Downloads 234
2510 Optimization and Energy Management of Hybrid Standalone Energy System

Authors: T. M. Tawfik, M. A. Badr, E. Y. El-Kady, O. E. Abdellatif

Abstract:

Electric power shortage is a serious problem in remote rural communities in Egypt. Over the past few years, the electrification of remote communities, including efficient on-site utilization of energy resources, has progressed considerably. Remote communities are usually fed from diesel generator (DG) networks because they need reliable energy and cheap fresh water. The main objective of this paper is to design an optimal, economic power supply from a hybrid standalone energy system (HSES) as an alternative energy source. It covers the energy requirements of a reverse osmosis desalination unit (DU) located at the National Research Centre farm in Noubarya, Egypt. The proposed system consists of PV panels, wind turbines (WT), batteries, and a DG as backup, supplying a DU load of 105.6 kWh/day with a 6.6 kW peak load, operating 16 hours a day. The objective of the HSES optimization is to select the suitable size of each system component and a control strategy that together provide a reliable, efficient, and cost-effective system, using net present cost (NPC) as the criterion. Harmonizing the different energy sources, energy storage, and load requirements is a difficult and challenging task. Thus, the performance of the various available configurations is investigated economically and technically using the iHOGA software, which is based on a genetic algorithm (GA). The achieved optimum configuration is further modified by optimizing the energy extracted from the renewable sources. Effectively minimizing the energy used to charge the battery ensures that most of the generated energy supplies the demand directly, increasing the utilization of the generated energy.
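The NPC criterion can be sketched as capital cost plus recurring annual costs discounted over the system lifetime (uniform-series present worth); all figures below are hypothetical, for illustration only, not the paper's system data:

```python
def net_present_cost(capital, annual_om, annual_fuel, lifetime_yr, rate):
    """Net present cost: capital outlay plus recurring annual costs
    discounted over the system lifetime at the given discount rate."""
    # Present-worth factor for a uniform annual series
    pwf = (1 - (1 + rate) ** -lifetime_yr) / rate
    return capital + (annual_om + annual_fuel) * pwf

# Hypothetical figures for a small PV/wind/battery/diesel system
npc = net_present_cost(capital=50_000, annual_om=1_200,
                       annual_fuel=3_000, lifetime_yr=20, rate=0.06)
```

An optimizer such as the GA the abstract mentions would, in effect, search component sizes to minimize a quantity of this form subject to reliability constraints.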

Keywords: energy management, hybrid system, renewable energy, remote area, optimization

Procedia PDF Downloads 187
2509 Utilization of Manila Clam Shells (Venerupis Philippinarum) and Raffia Palm Fiber (Raphia Farinifera) as an Additive in Producing Concrete Roof Tiles

Authors: Sofina Faith C. Navarro, Luke V. Subala, Rica H. Gatus, Alfonzo Ramon DG. Burguete

Abstract:

Roof tiles, as integral components of buildings, play a crucial role in protecting structures. This study focuses on the production of sustainable roof tiles that address the waste disposal challenges associated with Manila clam shells and mitigate the environmental impact of conventional roof tile materials. Roof tiles with various mix concentrations were developed, incorporating different proportions of powdered clam shell, which contains calcium carbonate, and shredded raffia palm fiber. The roof tiles were cast using standard methods and transported to the University of the Philippines Institute of Civil Engineering (UP-ICE) for flexural strength testing. The research assessed the flexural durability of concrete roof tiles with varying concentrations of raffia palm fiber and Manila clam shell additives. The findings indicate notable differences in maximum load capacities among the specimens, with C3.1 emerging as the concentration with the highest load-bearing capacity at 313.59729 N. This concentration, with a flexural strength of 2.15214, is identified as the most durable option, at a slightly heavier weight of 1.10 kg. On the other hand, C2.2, with a flexural strength of 0.366 and a weight of 0.80 kg, is highlighted for its durability performance while maintaining a lighter composition. Therefore, for the production of concrete roof tiles, C3.1 is recommended for optimal durability, while C2.2 is suggested as a preferable option when considering both durability and light weight.
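Flexural strength in three-point bending is conventionally sigma = 3FL/(2bd²); a minimal sketch using the reported C3.1 peak load and hypothetical specimen dimensions (the abstract gives no tile geometry, so the span, width, and thickness below are illustrative assumptions) shows the calculation:

```python
def flexural_strength_mpa(load_n, span_mm, width_mm, thickness_mm):
    """Three-point bending flexural strength, sigma = 3FL / (2 b d^2).
    Yields MPa when the load is in newtons and dimensions in millimetres."""
    return 3 * load_n * span_mm / (2 * width_mm * thickness_mm ** 2)

# Reported C3.1 peak load with hypothetical specimen dimensions
sigma = flexural_strength_mpa(load_n=313.6, span_mm=250,
                              width_mm=200, thickness_mm=15)
```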

Keywords: raffia palm fiber, flexural strength, lightweightness, Manila Clam Shells

Procedia PDF Downloads 48
2507 Dueling Burnout: The Dual Role Nurse

Authors: Melissa Dorsey

Abstract:

Moral distress and compassion fatigue plague nurses in the Cardiothoracic Intensive Care Unit (CTICU) and cause an unnecessary level of turnover. Dueling Burnout describes an initiative implemented in the CTICU to reduce the burnout nurses endure by encouraging dual roles with collaborating departments. Purpose: Critical care nurses are plagued by burnout, moral distress, and compassion fatigue due to the intensity of care provided. The purpose of the dual role program was to decrease these issues by providing relief from the intensity of the critical care environment while maintaining full-time employment. Relevance/Significance: Burnout, moral distress, and compassion fatigue are leading causes of CTICU turnover. A contributing factor to burnout is the workload of serving as a preceptor for a constant influx of new nurses (RNs). As a result of these factors, the CTICU averages 17% nursing turnover per year. The cost, unit disruption, and, most importantly, distress of the clinical nurses required an innovative approach to create an improved work environment and experience. Strategies/Implementation/Methods: In May 2018, a dual role pilot was initiated. The dual role constitutes 0.6 full-time equivalent (FTE) hours worked in the CTICU combined with 0.3 FTE worked in the Emergency Department (ED). ED nurses who expressed an interest in cross-training to the CTICU were also offered the dual role opportunity. The initial hypothesis was that full-time employees would benefit from a change in clinical setting, leading to increased engagement and job satisfaction. The dual role also presents an opportunity for professional development through the expansion of clinical skills in another specialty. The success of the pilot led to extending the dual role to areas beyond the ED. Evaluation/Outcomes/Results: The number of dual role clinical nurses has grown to 22. From this cohort, only one nurse has transferred out of the CTICU, a 5% turnover rate compared to the unit average of 17%. A role satisfaction survey of the dual role cohort found that, because of working in a dual role, 76.5% decreased their intent to leave, 100% decreased their level of burnout, and 100% reported an increase in overall job satisfaction. Nurses reported developing skills that are transferable between departments, and respondents emphasized the appreciation gained from working in multiple environments; the dual role served to transform their care. Conclusions/Implications: The dual role is an effective strategy to retain experienced nurses, decrease burnout and turnover, improve collaboration, and provide flexibility to meet staffing needs. It offers RNs an expansion of skills and relief from high-acuity and orientee demands, while improving job satisfaction.

Keywords: nursing retention, burnout, pandemic, strategic staffing, leadership

Procedia PDF Downloads 167
2506 Aberrant Acetylation/Methylation of Homeobox (HOX) Family Genes in Cumulus Cells of Infertile Women with Polycystic Ovary Syndrome (PCOS)

Authors: P. Asiabi, M. Shahhoseini, R. Favaedi, F. Hassani, N. Nassiri, B. Movaghar, L. Karimian, P. Eftekhariyazdi

Abstract:

Introduction: Polycystic ovary syndrome (PCOS) is a common gynecologic disorder. Many factors, including environment, metabolism, hormones, and genetics, are involved in the etiopathogenesis of PCOS. Among the genes with altered expression in human reproductive system disorders are the HOX family genes, which act as transcription factors in the regulation of cell proliferation, differentiation, adhesion, and migration. Since recent evidence points to epigenetic factors as causative mechanisms of PCOS, evaluating the association of the known epigenetic marks of histone 3 lysine 9 acetylation/methylation (H3K9ac/me) with the regulatory regions of these genes can provide better insight into PCOS. In the current study, cumulus cells (CCs), which play critical roles during folliculogenesis, oocyte maturation, ovulation, and fertilization, were used to monitor epigenetic alterations of HOX genes. Materials and methods: CCs were collected from 20 PCOS patients and 20 fertile women (18-36 years) with male-factor infertility, referred to the Royan Institute for ICSI under a GnRH antagonist protocol. Informed consent was obtained from the participants. Thirty-six hours after hCG injection, ovaries were punctured and cumulus-oocyte complexes were dissected. Soluble chromatin was extracted from the CCs, and chromatin immunoprecipitation (ChIP) coupled with real-time PCR was performed to quantify the epigenetic marks of histone H3K9 acetylation/methylation (H3K9ac/me) on the regulatory regions of 15 members of the HOX A-D subfamilies. Results: The data showed a significant increase of the H3K9ac mark on the regulatory regions of HOXA1, HOXB2, HOXC4, HOXD1, HOXD3, and HOXD4 (P < 0.01) and HOXC5 (P < 0.05), and a significant decrease of H3K9ac on the regulatory regions of HOXA2, HOXA4, HOXA5, HOXB1, and HOXB5 (P < 0.01) and HOXB3 (P < 0.05) in PCOS patients vs. the control group.
On the other side, there was a significant decrease in the H3K9me level on the regulatory regions of HOXA2, HOXA3, HOXA4, HOXA5, HOXB3, and HOXC4 (P ≤ 0.01) and HOXB5 (P < 0.05) in PCOS patients vs. the control group, and a significant increase of this mark on the regulatory regions of HOXB1, HOXB2, HOXC5, HOXD1, HOXD3, and HOXD4 (P ≤ 0.01) and HOXB4 (P < 0.05). There were no significant changes in the H3K9 acetylation/methylation levels on the regulatory regions of the other studied genes. Conclusion: The current study suggests that epigenetic alterations of HOX genes may be correlated with PCOS and, consequently, female infertility. This finding might refine the definition of PCOS and eventually provide insight for novel treatments of this disease with epigenetic drugs.
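ChIP signals quantified by real-time PCR are commonly expressed as percent of input; a minimal sketch of that standard calculation follows (the Ct values are hypothetical, and the abstract does not state which normalization the authors used):

```python
import math

def percent_input(ct_input, ct_ip, input_fraction=0.01):
    """ChIP-qPCR enrichment as percent of input: correct the input Ct for
    the chromatin fraction saved as input, then express the IP signal
    relative to 100% input (assumes ~100% amplification efficiency)."""
    adjusted_input = ct_input - math.log2(1 / input_fraction)
    return 100 * 2 ** (adjusted_input - ct_ip)

# Hypothetical Ct values: a 1% input at Ct 25 and an IP sample at Ct 28
enrichment = percent_input(25.0, 28.0)
```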

Keywords: epigenetic, HOX genes, PCOS, female infertility

Procedia PDF Downloads 305
2505 Combining Patients Pain Scores Reports with Functionality Scales in Chronic Low Back Pain Patients

Authors: Ivana Knezevic, Kenneth D. Candido, N. Nick Knezevic

Abstract:

Background: While pain intensity scales remain a generally accepted assessment tool, the numeric pain rating score is highly subjective; we nevertheless rely on it to judge treatment effects. Misinterpretation of pain can lead practitioners to underestimate or overestimate the patient’s medical condition. The purpose of this study was to analyze how the numeric rating pain scores given by patients with low back pain correlate with their functional activity levels. Methods: We included 100 consecutive patients with radicular low back pain (LBP) after Institutional Review Board (IRB) approval. Pain scores, numeric rating scale (NRS) responses at rest and in movement, and Oswestry Disability Index (ODI) questionnaire answers were collected 10 times over 12 months. The ODI questionnaire targets a patient’s activities and physical limitations, as well as the ability to manage stationary everyday duties. Statistical analysis was performed using SPSS software version 20. Results: The average duration of LBP was 14±22 months at the beginning of the study. All patients included in the study were between 24 and 78 years old (average 48.85±14); 56% were women and 44% men. Differences between ODI and pain scores in the range from -10% to +10% were considered “normal”. Discrepancies in pain scores were graded as mild between -30% and -11% or +11% and +30%; moderate between -50% and -31% or +31% and +50%; and severe if differences exceeded -50% or +50%. Our data showed that pain scores at rest correlated well with the ODI in 65% of patients. In 30% of patients, mild discrepancies were present (negative in 21% and positive in 9%); 4% of patients had moderate and 1% severe discrepancies. “Negative discrepancy” means that patients graded their pain scores much higher than their functional ability and most likely exaggerated their pain.
“Positive discrepancy” means that patients graded their pain scores much lower than their functional ability and most likely underrated their pain. Comparisons between the ODI and pain scores during movement showed a normal correlation in only 39% of patients. Mild discrepancies were present in 42% (negative in 39% and positive in 3%), moderate in 14% (all negative), and severe in 5% (all negative) of patients. Overall, 58% of patients unknowingly exaggerated their pain during movement. Inconsistencies were equally common in male and female patients (p=0.606 and p=0.928). Our results showed a negative correlation between patients’ satisfaction and the degree of inconsistency in reported pain. Furthermore, patients taking opioids showed more discrepancies in reported pain intensity scores than patients taking non-opioid analgesics or no medications for LBP (p=0.038). There was a highly statistically significant correlation between morphine-equivalent doses and the level of discrepancy (p<0.0001). Conclusion: We emphasize patient education in pain evaluation as a vital step toward accurate pain level reporting, and we showed a direct correlation with patients’ satisfaction. Furthermore, we must identify other parameters for defining our patients’ chronic pain conditions, such as functionality scales and quality of life questionnaires, and should move away from an overly simplistic subjective rating scale.
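The grading rule described above can be written down directly. The following sketch assumes both the ODI and the pain score are expressed on the same 0-100% scale, as in the study's comparison (the function name is illustrative):

```python
def classify_discrepancy(odi_pct, pain_pct):
    """Grade the ODI-minus-pain difference using the study's bands.
    A negative difference means pain was rated above functional
    ability (likely exaggeration); positive means likely underrating."""
    d = odi_pct - pain_pct
    a = abs(d)
    if a <= 10:
        return ("normal", None)
    band = "mild" if a <= 30 else ("moderate" if a <= 50 else "severe")
    return (band, "negative" if d < 0 else "positive")
```

For example, a patient reporting 65% pain against a 40% ODI falls in the mild negative band.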

Keywords: pain score, functionality scales, low back pain, lumbar

Procedia PDF Downloads 218
2504 Exploring Valproic Acid (VPA) Analogues Interactions with HDAC8 Involved in VPA Mediated Teratogenicity: A Toxicoinformatics Analysis

Authors: Sakshi Piplani, Ajit Kumar

Abstract:

Valproic acid (VPA) is the first synthetic therapeutic agent used to treat epileptic disorders, which affect nearly 1% of the world population. The teratogenicity caused by VPA has prompted the search for next-generation drugs with better efficacy and fewer side effects. Recent studies have identified HDAC8 as the direct target of VPA that causes the teratogenic effect in the foetus. We employed molecular dynamics (MD) and docking simulations to understand the binding mode of VPA and its analogues onto HDAC8. A total of twenty 3D structures of human HDAC8 isoforms were selected using a BLAST-P search against the PDB. Multiple sequence alignment was carried out using ClustalW, and PDB 3F07, having the fewest missing and mutated regions, was selected for the study. The missing residues of the loop region were constructed using MODELLER, and the energy was minimized. A set of 216 structural analogues (>90% identity) of VPA was obtained from the PubChem and ZINC databases, and their energies were optimized with ChemSketch software using a 3-D CHARMM-type force field. Four major enzymes of neurotransmitter metabolism (GABAt, SSADH, α-KGDH, GAD) involved in anticonvulsant activity were docked with VPA and its analogues. Out of the 216 analogues, 75 were selected on the basis of lower binding energy and inhibition constant compared to VPA, and were thus predicted to have anticonvulsant activity. The selected hHDAC8 structure was then subjected to MD simulation using a licensed version of YASARA with the AMBER99SB force field. The structure was solvated in a rectangular box of TIP3P water. The simulation was carried out with periodic boundary conditions, and electrostatic interactions were treated with the Particle Mesh Ewald algorithm. The pH, temperature, and pressure of the system were set to 7.4, 323 K, and 1 atm, respectively. Simulation snapshots were stored every 25 ps. The MD simulation was carried out for 20 ns, and a PDB file of the HDAC8 structure was saved every 2 ns.
The structures were analysed using CASTp and UCSF Chimera, and the most stabilized structure (at 20 ns) was used for the docking study. Molecular docking of the 75 selected VPA analogues with PDB 3F07 was performed using AutoDock 4.2.6. The Lamarckian genetic algorithm was used to generate conformations of the docked ligand and structure. The docking study revealed that VPA and its analogues have greater affinity towards the ‘hydrophobic active site channel’, whose hydrophobic properties allow VPA and its analogues to take part in van der Waals interactions with TYR24, HIS42, VAL41, TYR20, SER138, and TRP137, while TRP137 and SER138 showed hydrogen-bonding interactions with the VPA analogues. Fourteen analogues showed better binding affinity than VPA. The admetSAR server was used to predict the ADMET properties of the selected VPA analogues to assess their druggability. On the basis of the ADMET screening, nine molecules were selected and are being used for in vivo evaluation in the Danio rerio model.
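The inhibition constants used to rank analogues are related to the docking binding free energy by a standard thermodynamic identity, which AutoDock applies at an assumed temperature. A minimal sketch of that conversion (standard temperature assumed; not the paper's exact settings):

```python
import math

R = 0.0019872   # gas constant in kcal/(mol*K)
T = 298.15      # assumed standard temperature in K

def estimated_ki(delta_g_kcal_mol):
    """Estimated inhibition constant (mol/L) from a docking binding
    free energy via Ki = exp(dG / RT); more negative dG => smaller Ki."""
    return math.exp(delta_g_kcal_mol / (R * T))
```

A binding energy of -6.0 kcal/mol, for instance, corresponds to a Ki on the order of tens of micromolar.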

Keywords: HDAC8, docking, molecular dynamics simulation, valproic acid

Procedia PDF Downloads 229
2503 Microwave Heating and Catalytic Activity of Iron/Carbon Materials for H₂ Production from the Decomposition of Plastic Wastes

Authors: Peng Zhang, Cai Liang

Abstract:

Non-biodegradable plastic wastes pose severe environmental and ecological contamination. Numerous technologies, such as pyrolysis, incineration, and landfilling, have already been employed for the treatment of plastic waste. Compared with conventional methods, microwave heating has displayed unique advantages for the rapid production of hydrogen from plastic wastes. Understanding the interaction between microwave radiation and materials would promote the optimization of several parameters of the microwave reaction system. In this work, various carbon materials have been investigated to reveal their microwave heating performance and the ensuing catalytic activity. Results showed that the diversity in heating characteristics was mainly due to the dielectric properties and the individual microstructures. Furthermore, the gaps and steps on the surface of the carbon materials led to distortion of the electromagnetic field, which correspondingly induced plasma discharging. The intensity and location of the local plasma were also studied. For high-yield H₂ production, iron nanoparticles were selected as the active sites, and a series of iron/carbon bifunctional catalysts were synthesized. Apart from their high catalytic activity, the nano-sized iron particles, being close in size to the microwave skin depth, transfer microwave irradiation into heat, intensifying the decomposition of plastics. Under microwave radiation, iron supported on activated carbon at 10 wt.% loading exhibited the best catalytic activity for H₂ production. Specifically, the plastics were rapidly heated up and subsequently converted into H₂ with a hydrogen efficiency of 85%. This work demonstrates a deeper understanding of microwave reaction systems and provides guidance for optimizing plastic treatment.

Keywords: plastic waste, recycling, hydrogen, microwave

Procedia PDF Downloads 53
2502 Combining Diffusion Maps and Diffusion Models for Enhanced Data Analysis

Authors: Meng Su

Abstract:

High-dimensional data analysis often presents challenges in capturing the complex, nonlinear relationships and manifold structures inherent to the data. This article presents a novel approach that leverages the strengths of two powerful techniques, Diffusion Maps and Diffusion Probabilistic Models (DPMs), to address these challenges. By integrating the dimensionality reduction capability of Diffusion Maps with the data modeling ability of DPMs, the proposed method aims to provide a comprehensive solution for analyzing and generating high-dimensional data. The Diffusion Map technique preserves the nonlinear relationships and manifold structure of the data by mapping it to a lower-dimensional space using the eigenvectors of the graph Laplacian matrix. Meanwhile, DPMs capture the dependencies within the data, enabling effective modeling and generation of new data points in the low-dimensional space. The generated data points can then be mapped back to the original high-dimensional space, ensuring consistency with the underlying manifold structure. Through a detailed example implementation, the article demonstrates the potential of the proposed hybrid approach to achieve more accurate and effective modeling and generation of complex, high-dimensional data. Furthermore, it discusses possible applications in various domains, such as image synthesis, time-series forecasting, and anomaly detection, and outlines future research directions for enhancing the scalability, performance, and integration with other machine learning techniques. By combining the strengths of Diffusion Maps and DPMs, this work paves the way for more advanced and robust data analysis methods.
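The dimensionality-reduction half of the approach can be sketched concretely. The following is a minimal, illustrative diffusion-map embedding (not the article's implementation): Gaussian affinities are row-normalized into a Markov transition matrix, and the leading non-trivial eigenvectors of that matrix give the low-dimensional coordinates.

```python
import numpy as np

def diffusion_map(X, eps=1.0, n_components=2):
    """Minimal diffusion-map sketch: kernel -> Markov matrix ->
    top non-trivial eigenvectors scaled by their eigenvalues."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)  # pairwise squared distances
    K = np.exp(-d2 / eps)                                     # Gaussian affinity kernel
    P = K / K.sum(axis=1, keepdims=True)                      # row-stochastic transition matrix
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)                            # eigenvalue 1 comes first
    idx = order[1:n_components + 1]                           # drop the trivial constant mode
    return vecs.real[:, idx] * vals.real[idx]
```

In the hybrid scheme described above, a DPM would then be trained on these low-dimensional coordinates, and generated samples mapped back toward the original space.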

Keywords: diffusion maps, diffusion probabilistic models (DPMs), manifold learning, high-dimensional data analysis

Procedia PDF Downloads 86
2501 Understanding Retail Benefits Trade-offs of Dynamic Expiration Dates (DED) Associated with Food Waste

Authors: Junzhang Wu, Yifeng Zou, Alessandro Manzardo, Antonio Scipioni

Abstract:

Dynamic expiration dates (DEDs) play an essential role in reducing food waste in the context of a sustainable cold chain and food system. However, the trade-offs in retail benefits when setting an expiration date on fresh food products are unknown. This study aims to develop a multi-dimensional decision-making model that integrates DEDs with food waste based on wireless sensor network technology. The model considers the initial quality of the fresh food and the rate of change of food quality with storage temperature as cross-independent variables to identify the potential impacts on food waste in retail of applying a DED system. The results show that the retail benefits of a DED system depend on the scenario, despite its advanced technology. In a DED system, the storage temperature of the retail shelf most strongly drives the food waste rate, followed by the rate of change of food quality and the initial quality of the food products. We found that the DED system could reduce food waste when food products are stored in lower-temperature areas. Besides, the potential for food savings over an extended replenishment cycle is significantly more advantageous than with fixed expiration dates (FEDs). On the other hand, the information-sharing approach of the DED system is relatively limited in improving the sustainability performance of food waste in retail and can even mislead consumers’ choices. The research provides a comprehensive understanding to support the techno-economic choice of DEDs in relation to food waste in retail.
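To make the interplay of the model's variables concrete, here is a hypothetical DED rule, not the authors' model: quality decays exponentially at a rate that doubles for every 10 °C above a reference temperature (Q10 kinetics), and the expiry date is set where quality reaches an acceptance floor. All parameter names and values are illustrative.

```python
import math

def dynamic_expiry_days(q0, k_ref, shelf_temp_c, t_ref_c=4.0, q_min=0.2, q10=2.0):
    """Days until quality q0 decays to the floor q_min, with a
    temperature-adjusted first-order decay rate (hypothetical)."""
    k = k_ref * q10 ** ((shelf_temp_c - t_ref_c) / 10.0)  # adjusted decay rate, 1/day
    return math.log(q0 / q_min) / k                        # solve q0*exp(-k*t) = q_min
```

Under this toy rule, a shelf 10 °C warmer halves the printed shelf life, which mirrors the paper's finding that shelf temperature dominates the food waste rate.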

Keywords: dynamic expiry dates (DEDs), food waste, retail benefits, fixed expiration dates (FEDs)

Procedia PDF Downloads 99
2500 Global Healthcare Village Based on Mobile Cloud Computing

Authors: Laleh Boroumand, Muhammad Shiraz, Abdullah Gani, Rashid Hafeez Khokhar

Abstract:

Cloud computing, the delivery of hardware and software as a service over a network, has applications in the area of health care. The emergency cases reported in most medical centers call for an efficient scheme to make health data available with a shorter response time. To this end, we propose a mobile global healthcare village (MGHV) model that combines the components of three deployment models, comprising country, continent, and global health clouds, to help solve the problem mentioned above. In the continent model, two data centers are created, one local and one global. The local one serves the requests of residents within the continent, whereas the global one serves the requests of others. With the methods adopted, the availability of relevant medical data to patients, specialists, and emergency staff is assured regardless of location and time. From our intensive experiments using a simulation approach, it was observed that a broker policy scheme optimized for response time yields very good performance in terms of response time reduction. Our results remain comparable to others as the number of virtual machines increases (80-640 virtual machines); the proportional increase in response time is within 9%. The results obtained from our simulation experiments show that utilizing MGHV leads to a reduction of health care expenditures and helps solve the problem of unqualified medical staff faced by both developed and developing countries.
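The local/global split can be illustrated with a toy service-broker policy (this is a sketch of the idea, not the paper's simulator; the field names are illustrative): prefer a data center local to the requester's continent, and otherwise pick the lowest estimated response time.

```python
def choose_datacenter(request_region, datacenters):
    """Route a request to a local data center if one exists for the
    requester's region; otherwise pick the globally fastest one."""
    local = [d for d in datacenters if d["region"] == request_region]
    pool = local if local else datacenters
    # Estimated response time = network latency + queueing delay.
    return min(pool, key=lambda d: d["latency_ms"] + d["queue_ms"])
```

A real broker would also weigh load, cost, and data-residency constraints, which is where the response-time trade-offs measured in the simulation come in.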

Keywords: mobile cloud computing (MCC), e-healthcare, availability, response time, service broker policy

Procedia PDF Downloads 364
2499 Evaluating the Use of Manned and Unmanned Aerial Vehicles in Strategic Offensive Tasks

Authors: Yildiray Korkmaz, Mehmet Aksoy

Abstract:

In today's operations, countries want to reach their aims in the shortest way possible for economic, political, and humanitarian reasons. The most effective way of achieving this goal is to be able to penetrate strategic targets. Strategic targets are generally located deep inside a country and are defended by modern and efficient surface-to-air missile (SAM) platforms operated in integration with Intelligence, Surveillance and Reconnaissance (ISR) systems. Moreover, these high-value targets are buried deep underground and hardened with strong materials against attacks. Therefore, penetrating these targets requires very detailed intelligence. This intelligence process should cover a wide range, from weaponry to threat assessment; accordingly, the framework of the attack package will be determined. This mission package has to execute missions in a high-threat environment. The way to minimize the risk of loss of life is to use packages formed of UAVs. However, some limitations arising from the characteristics of UAVs restrict the performance of a mission package consisting solely of UAVs, so the mission package should be formed of UAVs under the leadership of a fifth-generation manned aircraft. Thus, we can minimize the limitations, easily penetrate deep inside enemy territory with minimum risk, make decisions according to ever-changing conditions, and finally destroy the strategic targets. In this article, the strengths and weaknesses of UAVs are examined through a SWOT analysis. It also describes the features of a mission package and presents, as an example, what kind of mission package should be formed in order to obtain the marginal benefit and penetrate strategic targets as autonomous mission execution capability develops in the near future.

Keywords: UAV, autonomy, mission package, strategic attack, mission planning

Procedia PDF Downloads 533
2498 Classical and Bayesian Inference of the Generalized Log-Logistic Distribution with Applications to Survival Data

Authors: Abdisalam Hassan Muse, Samuel Mwalili, Oscar Ngesa

Abstract:

A generalized log-logistic distribution with variable shapes of the hazard rate was introduced and studied, extending the log-logistic distribution by adding an extra parameter to the classical distribution and leading to greater flexibility in analysing and modeling various data types. The proposed distribution has a large number of well-known lifetime special sub-models, such as the Weibull, log-logistic, exponential, and Burr XII distributions. Its basic mathematical and statistical properties were derived. The method of maximum likelihood was adopted for estimating the unknown parameters of the proposed distribution, and a Monte Carlo simulation study was carried out to assess the behavior of the estimators. The importance of this distribution lies in its ability to model both monotone (increasing and decreasing) and non-monotone (unimodal and bathtub-shaped, or reversed “bathtub”-shaped) hazard rate functions, which are quite common in survival and reliability data analysis. Furthermore, the flexibility and usefulness of the proposed distribution are illustrated on a real-life data set and compared to its sub-models (the Weibull, log-logistic, and Burr XII distributions) and to other 3-parameter parametric survival distributions, such as the exponentiated Weibull, the 3-parameter lognormal, the 3-parameter gamma, the 3-parameter Weibull, and the 3-parameter log-logistic (also known as shifted log-logistic) distributions. The proposed distribution provided a better fit than all of the competing distributions based on the goodness-of-fit tests, the log-likelihood, and information criterion values. Finally, a Bayesian analysis and an assessment of the performance of Gibbs sampling on the data set were also carried out.
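For reference, the hazard of the classical log-logistic sub-model, which the extra shape parameter generalizes, can be written out directly; the generalized form itself is not reproduced here. With scale α > 0 and shape β > 0, β ≤ 1 gives a monotone decreasing hazard and β > 1 a unimodal one:

```python
def loglogistic_hazard(t, alpha, beta):
    """Classical log-logistic hazard:
    h(t) = (beta/alpha) * (t/alpha)**(beta-1) / (1 + (t/alpha)**beta)."""
    x = (t / alpha) ** beta
    return (beta / alpha) * (t / alpha) ** (beta - 1) / (1.0 + x)
```

For α = 1 and β = 2, for example, the hazard rises to a peak of 1 at t = 1 and then declines, illustrating the unimodal shape the abstract refers to.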

Keywords: hazard rate function, log-logistic distribution, maximum likelihood estimation, generalized log-logistic distribution, survival data, Monte Carlo simulation

Procedia PDF Downloads 184
2497 Protection of Steel Bars in Reinforce Concrete with Zinc Based Coverings

Authors: Hamed Rajabzadeh Gatabi, Soroush Dastgheibifard, Mahsa Asnafi

Abstract:

Reinforced concrete has undoubtedly been one of the most significant materials used in the construction industry for many years. However, environmental exposure can contribute to its corrosion or failure, one form of which is bar, or so-called reinforcement, failure. To combat this problem, one of the oxidation prevention methods investigated was the barrier protection method, implemented through the application of an organic coating, specifically fusion-bonded epoxy. In this study, a comparative method was applied to two different kinds of coated bars (zinc-rich epoxy and polyamide epoxy coated bars) and an uncoated bar. To evaluate these reinforced concretes, the adhesion, toughness, thickness, and corrosion performance of the coatings were compared using tools such as Cu/CuSO4 electrodes and EIS. Different types of concrete were exposed to a salty environment (NaCl 3.5%), and their durability was measured. According to the experiments, the thick epoxy coatings have acceptable adhesion and strength. The adhesion of the polyamide epoxy coatings to the bars was slightly better than that of the zinc-rich epoxy coatings; nonetheless, they were stiffer than the zinc-rich epoxy coatings. Conversely, bars coated with zinc-rich epoxy showed more negative oxidation potentials, reflecting the sacrificial protection of the bars by zinc particles. On the whole, zinc-rich epoxy coatings are more corrosion-resistant than polyamide epoxy coatings, owing to the sacrificial consumption of zinc and other parameters; additionally, if epoxy coatings without surface defects are carefully applied to the rebar surface, the life of steel structures can increase dramatically.

Keywords: surface coating, epoxy polyamide, reinforced concrete bars, salty environment

Procedia PDF Downloads 271
2496 Improving Sample Analysis and Interpretation Using QIAGEN's Latest Investigator STR Multiplex PCR Assays with a Novel Quality Sensor

Authors: Daniel Mueller, Melanie Breitbach, Stefan Cornelius, Sarah Pakulla-Dickel, Margaretha Koenig, Anke Prochnow, Mario Scherer

Abstract:

The European STR standard set (ESS) of loci, as well as the new expanded CODIS core loci set recommended by the CODIS Core Loci Working Group, has led to higher standardization and harmonization of STR analysis across borders. Various multiplex PCR assays have since been developed for the analysis of these 17 ESS or 23 CODIS-expansion STR markers, all of which meet high technical demands. However, forensic analysts are often faced with difficult STR results and the questions that follow: What is the reason that no peaks are visible in the electropherogram? Did the PCR fail? Was the DNA concentration too low? QIAGEN’s newest Investigator STR kits contain a novel Quality Sensor (QS) that acts as an internal performance control and gives useful information for evaluating the amplification efficiency of the PCR. QS indicates whether the reaction has worked in general and furthermore allows discrimination between the presence of inhibitors and DNA degradation as causes of the typical ski-slope effect observed in STR profiles of such challenging samples. This information can be used to choose the most appropriate rework strategy. Based on the latest PCR chemistry, called FRM 2.0, QIAGEN now provides the next technological generation for STR analysis, the Investigator ESSplex SE QS and Investigator 24plex QS Kits. The new PCR chemistry ensures robust and fast PCR amplification with improved inhibitor resistance and easy handling for a manual or automated setup. The short cycling time of 60 min reduces the duration of the total PCR analysis, making a whole-workflow analysis in one day feasible. To facilitate the interpretation of STR results, a smart primer design was applied for the best possible marker distribution, highest concordance rates, and robust gender typing.

Keywords: PCR, QIAGEN, quality sensor, STR

Procedia PDF Downloads 477
2495 Omni-Modeler: Dynamic Learning for Pedestrian Redetection

Authors: Michael Karnes, Alper Yilmaz

Abstract:

This paper presents the application of the Omni-Modeler to pedestrian redetection. The pedestrian redetection task creates several challenges for deep neural networks (DNNs) due to the variation of pedestrian appearance with camera position, the variety of environmental conditions, and the specificity required to recognize one pedestrian from another. DNNs require significant training sets and are not easily adapted to changes in class appearances or changes in the set of classes held in their knowledge domain. Pedestrian redetection requires an algorithm that can actively manage its knowledge domain as individuals move in and out of the scene, as well as learn individual appearances from a few frames of a video. The Omni-Modeler is a dynamically learning few-shot visual recognition algorithm developed for tasks with limited training data availability. The Omni-Modeler adapts the knowledge domain of pre-trained deep neural networks to novel concepts with a calculated localized language encoder. The Omni-Modeler knowledge domain is generated by creating a dynamic dictionary of concept definitions, which is directly updatable as new information becomes available. Query images are identified through nearest-neighbor comparison to the learned object definitions. The study presented in this paper evaluates its performance in re-identifying individuals as they move through a scene in both single-camera and multi-camera tracking applications. The results demonstrate that the Omni-Modeler shows potential for cross-camera-view pedestrian redetection and is highly effective for single-camera redetection, with 93% accuracy across 30 individuals using 64 example images per individual.
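The dynamic-dictionary-plus-nearest-neighbor idea can be sketched as follows. Class and method names here are illustrative, not the Omni-Modeler's API: one prototype embedding is kept per identity, updated online as new frames arrive, and queries are classified by cosine similarity to the prototypes.

```python
import numpy as np

class ConceptDictionary:
    """Sketch of a dynamic dictionary of concept definitions:
    per-identity prototypes with momentum updates and
    nearest-neighbor (cosine) query matching."""

    def __init__(self, momentum=0.9):
        self.momentum = momentum
        self.protos = {}  # identity -> unit-norm prototype vector

    def update(self, identity, embedding):
        e = embedding / np.linalg.norm(embedding)
        if identity in self.protos:
            p = self.momentum * self.protos[identity] + (1 - self.momentum) * e
            self.protos[identity] = p / np.linalg.norm(p)  # keep unit norm
        else:
            self.protos[identity] = e  # enroll a new individual on the fly

    def query(self, embedding):
        e = embedding / np.linalg.norm(embedding)
        return max(self.protos, key=lambda k: float(np.dot(self.protos[k], e)))
```

Removing an entry when an individual leaves the scene is a dictionary deletion, which is what makes the knowledge domain directly manageable at runtime.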

Keywords: dynamic learning, few-shot learning, pedestrian redetection, visual recognition

Procedia PDF Downloads 60
2494 Decomposition of the Discount Function Into Impatience and Uncertainty Aversion. How Neurofinance Can Help to Understand Behavioral Anomalies

Authors: Roberta Martino, Viviana Ventre

Abstract:

Intertemporal choices are choices under conditions of uncertainty in which the consequences are distributed over time. The Discounted Utility Model is the essential reference for describing the individual in the context of intertemporal choice. The model is based on the idea that the individual selects the alternative with the highest utility, which is calculated by multiplying the cardinal utility of the outcome, as if its receipt were instantaneous, by the discount function, which decreases the utility value according to how far the actual receipt of the outcome lies from the moment the choice is made. Initially, the discount function was assumed to have an exponential form, whose rate of decrease over time is constant, in line with the profile of a rational investor as described by classical economics. Instead, empirical evidence called for the formulation of alternative, hyperbolic models that better represented the actual actions of investors. Attitudes that do not comply with the principles of classical rationality are termed anomalous, i.e., difficult to rationalize and describe through normative models. The development of behavioral finance, which describes investor behavior through cognitive psychology, has shown that deviations from rationality are due to the bounded rationality of human beings. This means that when a choice is made in a very difficult, information-rich environment, the brain strikes a compromise between the cognitive effort required and the selection of an alternative. Moreover, the evaluation and selection of the alternative, and the collection and processing of information, are dynamics conditioned by systematic distortions of the decision-making process: the behavioral biases involving the individual's emotional and cognitive systems. In this paper, we present an original decomposition of the discount function to investigate the psychological principles of hyperbolic discounting.
It is possible to decompose the curve into two components: the first component is responsible for the smaller decrease in the outcome's value as time increases and is related to the individual's impatience; the second component relates to the change in direction of the tangent vector to the curve and indicates how strongly the individual perceives the indeterminacy of the future, indicating his or her aversion to uncertainty. This decomposition allows interesting conclusions to be drawn with respect to the concept of impatience and the emotional drives involved in decision-making. The contribution that neuroscience can make to decision theory and intertemporal choice theory is vast, as it allows the decision-making process to be described as the interplay between the individual's emotional and cognitive factors. Neurofinance is a discipline that uses a multidisciplinary approach to investigate how the brain influences decision-making. Indeed, considering that the decision-making process is linked to the activity of the prefrontal cortex and amygdala, neurofinance can help determine the extent to which anomalous attitudes respect the principles of rationality.
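The exponential/hyperbolic contrast the abstract builds on is easy to make concrete. In the standard forms below (not the authors' decomposition), the exponential factor declines by the same proportion in every period, while the hyperbolic factor declines steeply at first and ever more slowly later, which is exactly the anomalous pattern described above:

```python
import math

def exponential_discount(t, rho):
    """Constant-rate (classically rational) discount factor."""
    return math.exp(-rho * t)

def hyperbolic_discount(t, k):
    """Mazur-style hyperbolic discount factor 1/(1 + k*t)."""
    return 1.0 / (1.0 + k * t)
```

Comparing the proportional one-period decline near t = 0 with the decline near t = 9 shows the hyperbolic curve's instantaneous discount rate falling over time, the feature the proposed decomposition splits into impatience and uncertainty-aversion components.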

Keywords: impatience, intertemporal choice, neurofinance, rationality, uncertainty

Procedia PDF Downloads 112
2493 The Effects of an Exercise Program Integrated with the Transtheoretical Model on Pain and Trunk Muscle Endurance of Rice Farmers with Chronic Low Back Pain

Authors: Thanakorn Thanawat, Nomjit Nualnetr

Abstract:

Background and Purpose: In Thailand, rice farmers have the highest prevalence of low back pain compared with other manual workers. Exercise has been suggested as a principal part of treatment programs for low back pain. However, programs should be tailored to an individual's readiness to change, as categorized by a behavioral approach. This study aimed to evaluate the difference in responses, regarding severity of pain and trunk muscle endurance, between rice farmers with chronic low back pain who received an exercise program integrated with the transtheoretical model of behavior change (TTM) and those in a comparison group. Materials and Methods: An 8-week exercise program was conducted with rice farmers with chronic low back pain who were randomized to either the TTM (n=62, 52 women and 10 men, mean age ± SD 45.0±5.4 years) or non-TTM (n=64, 53 women and 11 men, mean age ± SD 44.7±5.4 years) group. All participants were tested for severity of pain and trunk (abdominal and back) muscle endurance at baseline (week 0) and immediately after termination of the program (week 8). Data were analysed using descriptive statistics and Student's t-tests. The results revealed that both the TTM and non-TTM groups decreased their severity of pain and improved trunk muscle endurance after participating in the 8-week exercise program. Compared with the non-TTM group, however, the TTM group showed a significantly greater increase in abdominal muscle endurance (P=0.004, 95% CI -12.4 to -2.3). Conclusions and Clinical Relevance: An exercise program integrated with the TTM can benefit rice farmers with chronic low back pain. Future studies with a longitudinal design and more outcome measures, such as physical performance and quality of life, are suggested to reveal further benefits of the program.
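The between-group comparison above rests on a two-sample t statistic. A minimal sketch of the Welch form of that statistic follows (a common variant of the Student's t-test the study reports; the data in the test are illustrative, not the study's):

```python
import math
from statistics import mean, stdev

def two_sample_t(a, b):
    """Welch two-sample t statistic: difference of group means
    divided by the standard error of that difference."""
    va, vb = stdev(a) ** 2, stdev(b) ** 2
    return (mean(a) - mean(b)) / math.sqrt(va / len(a) + vb / len(b))
```

A positive statistic indicates a larger mean change in the first group, e.g. a greater endurance gain in the TTM group than in the non-TTM group.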

Keywords: chronic low back pain, transtheoretical model, rice farmers, exercise program

Procedia PDF Downloads 371
2492 Unlocking New Room of Production in Brown Field; Integration of Geological Data Conditioned 3D Reservoir Modelling of Lower Senonian Matulla Formation, Ras Budran Field, East Central Gulf of Suez, Egypt

Authors: Nader Mohamed

Abstract:

The Late Cretaceous deposits are well developed throughout Egypt. This is due to a transgression phase associated with the subsidence caused by the Neo-Tethyan rift event that took place across the northern margin of Africa, resulting in a period of dominantly marine deposition in the Gulf of Suez. The Late Cretaceous Nezzazat Group comprises the Cenomanian and Turonian sediments and the clastic sediments of the Lower Senonian. The Nezzazat Group has been divided into four formations, namely, from base to top, the Raha Formation, the Abu Qada Formation, the Wata Formation, and the Matulla Formation. The Cenomanian Raha and the Lower Senonian Matulla formations are the most important clastic sequence in the Nezzazat Group because they provide the highest net reservoir thickness and the highest net/gross ratio. This study emphasizes the Matulla Formation, located in the eastern part of the Gulf of Suez. The three stratigraphic surface sections (Wadi Sudr, Wadi Matulla, and Gabal Nezzazat), which represent the exposed Coniacian-Santonian sediments in Sinai, are used for correlating the Matulla sediments of the Ras Budran field. Cutting descriptions, petrographic examination, log behavior, and biostratigraphy, together with outcrops, are used to identify the reservoir characteristics, lithology, and facies environment, and to subdivide the Matulla Formation into three units. The lower unit is believed to be the main reservoir, as it consists mainly of sands with shale and sandy carbonates, while the other units are mainly carbonate with some streaks of shale and sand. Reservoir modeling is an effective technique that assists in reservoir management, including decisions concerning the development and depletion of hydrocarbon reserves, so it was essential to model the Matulla reservoir as accurately as possible in order to better evaluate and calculate the reserves and to determine the most effective way of recovering as much of the petroleum as economically possible.
All available data on the Matulla Formation are used to build the reservoir structure model and the lithofacies, porosity, permeability and water saturation models, which are the main parameters that describe the reservoir and provide the basis for an effective evaluation of its oil potential. This study has shown the effectiveness of: 1) the integration of geological data to evaluate the Matulla Formation and subdivide it into three units; 2) lithology and facies environment interpretation, which helped define the depositional setting of the Matulla Formation; 3) 3D reservoir modeling as a tool for adequately understanding the spatial distribution of properties and for evaluating previously unlocked reservoir areas of the Matulla Formation, which have to be drilled to investigate and exploit the undrained oil; 4) the resulting addition of new production and additional reserves to the Ras Budran field.
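The abstract mentions calculating reserves from the property models (porosity, water saturation, net/gross) but does not give its method. As a minimal sketch, the standard volumetric stock-tank oil initially in place (STOIIP) formula shows how such grid-derived averages feed a reserves estimate; the function name and all input values below are hypothetical, not taken from the study.

```python
def stoiip_stb(grv_acre_ft, ntg, porosity, sw, bo):
    """Stock-tank oil initially in place, in stock-tank barrels (stb).

    grv_acre_ft : gross rock volume (acre-ft)
    ntg         : net-to-gross ratio (fraction)
    porosity    : average effective porosity (fraction)
    sw          : average water saturation (fraction)
    bo          : oil formation volume factor (reservoir bbl / stb)
    """
    BBL_PER_ACRE_FT = 7758.0  # conversion: barrels per acre-foot
    return BBL_PER_ACRE_FT * grv_acre_ft * ntg * porosity * (1.0 - sw) / bo

# Hypothetical inputs for a sand-rich unit:
oil_in_place = stoiip_stb(grv_acre_ft=50_000, ntg=0.6, porosity=0.18,
                          sw=0.35, bo=1.2)
print(f"STOIIP ~ {oil_in_place:,.0f} stb")
```

In a full-field workflow, the averages would instead be summed cell by cell over the 3D grid, which is what makes the property models described above so important.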

Keywords: geology, oil and gas, geoscience, sequence stratigraphy

Procedia PDF Downloads 88
2491 Interoperability of 505th Search and Rescue Group and the 205th Tactical Helicopter Wing of the Philippine Air Force in Search and Rescue Operations: An Assessment

Authors: Ryan C. Igama

Abstract:

The complexity of disaster risk reduction management has paved the way for various innovations and approaches to mitigate the loss of lives and casualties during disaster-related situations. The efficiency of response operations during disasters relies on the timely and organized deployment of search, rescue and retrieval teams. Indeed, the assistance provided by search, rescue and retrieval teams during disaster operations is a critical service needed to further minimize the loss of lives and casualties. The Armed Forces of the Philippines is mandated to provide humanitarian assistance and disaster relief operations during calamities and disasters. Thus, this study, “Interoperability of the 505th Search and Rescue Group and the 205th Tactical Helicopter Wing of the Philippine Air Force in Search and Rescue Operations: An Assessment,” was intended to provide substantial information to further strengthen and promote search and rescue capabilities in the Philippines. Specifically, it assesses the interoperability of the 505th Search and Rescue Group and the 205th Tactical Helicopter Wing of the Philippine Air Force. The study covered these two component units of the Philippine Air Force of the Armed Forces of the Philippines, whose personnel also served as the respondents. A qualitative approach was used, with focus group discussions, key informant interviews and documentary analysis as the primary means of obtaining the needed data. Essentially, the study was geared towards evaluating the effectiveness of the interoperability of the two (2) PAF units during search and rescue operations.
Further, it identified the impacts, gaps and challenges confronting interoperability in terms of training, equipment and coordination mechanisms, along with the measures needed for improvement. The results showed duplication of functions and tasks in HADR activities, specifically during the conduct of air rescue operations in calamity situations. In addition, it was revealed that there was a lack of equipment and training for the personnel involved in search and rescue operations, which is a vital element during calamity response activities. Based on the findings of the study, it was recommended that a strategic planning workshop be conducted on the duties and responsibilities of the personnel involved in search and rescue operations to address the command-and-control and interoperability issues of these units. Additionally, intensive HADR-related training should be conducted for the personnel of the two (2) PAF units so that they can become more proficient in their skills and sustainably increase their knowledge of search and rescue scenarios, including the capabilities of the respective units. Lastly, existing doctrines and policies should be updated to keep pace with evolving situations in search and rescue operations.

Keywords: interoperability, search and rescue capability, humanitarian assistance, disaster response

Procedia PDF Downloads 77
2490 Integrating Knowledge Distillation of Multiple Strategies

Authors: Min Jindong, Wang Mingxia

Abstract:

With the widespread use of artificial intelligence in everyday life, computer vision, and deep convolutional neural network models in particular, has developed rapidly. As real-world visual detection tasks grow more complex and recognition accuracy improves, object detection network models have also become very large. Huge deep neural network models are not well suited for deployment on edge devices with limited resources, and their inference latency is poor. In this paper, knowledge distillation is used to compress a huge, complex deep neural network model, comprehensively transferring the knowledge contained in the complex network to another lightweight network. Unlike traditional knowledge distillation methods, we propose a novel knowledge distillation scheme that incorporates multi-faceted features, called M-KD. When training and optimizing the deep neural network model for object detection, the soft-target outputs of the teacher network, the relationships between the layers of the teacher network, and the feature attention maps of the teacher network's hidden layers are all transferred to the student network as knowledge. At the same time, we introduce an intermediate transition layer, that is, an intermediate guidance layer, between the teacher network and the student network to bridge the large capacity gap between them. Finally, we add an exploration module to the traditional teacher-student knowledge distillation model, so that the student network not only inherits the knowledge of the teacher network but also explores some new knowledge and characteristics.
Comprehensive experiments using different distillation parameter configurations across multiple datasets and convolutional neural network models demonstrate that the proposed model achieves substantial improvements in both speed and accuracy.
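The abstract does not give the loss formulas for M-KD. As a minimal sketch of just the first ingredient it names, the soft-target component, the classic temperature-scaled distillation loss can be written in plain Python; the function names, temperature value, and logits below are illustrative assumptions, not the paper's actual objective, which also includes layer-relation and attention-map terms.

```python
import math

def softmax_t(logits, temperature):
    """Temperature-scaled softmax: higher T gives softer probabilities."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)                                # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def soft_target_loss(student_logits, teacher_logits, temperature=4.0):
    """KL(teacher || student) on temperature-softened outputs, scaled by T^2.

    The T^2 factor keeps gradient magnitudes comparable across temperatures.
    """
    p = softmax_t(teacher_logits, temperature)     # soft targets from the teacher
    q = softmax_t(student_logits, temperature)
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
    return temperature * temperature * kl

# Illustrative logits for one detection's class scores:
teacher = [9.0, 3.0, 1.0]
student = [5.0, 4.0, 2.0]
print(soft_target_loss(student, teacher))
```

In training, this term would be combined with the hard-label loss and, in a scheme like M-KD, with the additional feature-level terms the abstract describes.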

Keywords: object detection, knowledge distillation, convolutional network, model compression

Procedia PDF Downloads 261
2489 Monitoring the Effect of Doxorubicin Liposomal in VX2 Tumor Using Magnetic Resonance Imaging

Authors: Ren-Jy Ben, Jo-Chi Jao, Chiu-Ya Liao, Ya-Ru Tsai, Lain-Chyr Hwang, Po-Chou Chen

Abstract:

Cancer is still one of the serious diseases threatening human lives, so early diagnosis and effective treatment of tumors is a very important issue. Animal carcinoma models provide a simulation tool for studying pathogenesis, biological characteristics and therapeutic effects. Recently, drug delivery systems have been developed rapidly to improve therapeutic effects. Liposomes play an increasingly important role in clinical diagnosis and therapy by delivering a pharmaceutical or contrast agent to targeted sites; they can be absorbed and excreted by the human body and are known to be well tolerated. This study aimed to compare the therapeutic effects of encapsulated (doxorubicin liposomal, LipoDox) and un-encapsulated (doxorubicin, Dox) anti-tumor drugs using Magnetic Resonance Imaging (MRI). Twenty-four New Zealand rabbits implanted with VX2 carcinoma in the left thigh were divided into three groups of 8 rabbits each: a control group (untreated), a Dox-treated group and a LipoDox-treated group. MRI scans were performed three days after tumor implantation. A 1.5 T GE Signa HDxt whole-body MRI scanner with a high-resolution knee coil was used in this study. After a 3-plane localizer scan, three-dimensional (3D) fast spin echo (FSE) T2-weighted imaging (T2WI) was used for tumor volume quantification, and two-dimensional (2D) spoiled gradient recalled echo (SPGR) dynamic contrast-enhanced (DCE) MRI was used for tumor perfusion evaluation. The DCE-MRI protocol acquired four baseline images, followed by injection of the contrast agent Gd-DOTA through the ear vein of the rabbits; a series of 32 images was then acquired to observe the signal changes over time in the tumor and muscle. MRI scanning was scheduled weekly over a period of four weeks to observe tumor progression longitudinally.
The Dox and LipoDox treatments were administered three times in the first week, immediately after VX2 tumor implantation. ImageJ was used to quantify tumor volume and the time-course signal enhancement on the DCE images. The changes in tumor size showed that the growth of VX2 tumors was effectively inhibited in both the LipoDox-treated and Dox-treated groups. Furthermore, the tumor volume of the LipoDox-treated group was significantly lower than that of the Dox-treated group, which implies that LipoDox has a better therapeutic effect than Dox. The signal intensity of the LipoDox-treated group was significantly lower than that of the other two groups, which implies that the targeted therapeutic drug remained in the tumor tissue. This study provides a radiation-free and non-invasive MRI method for therapeutic monitoring of targeted liposomes in an animal tumor model.
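The study quantified DCE signal enhancement with ImageJ but does not spell out the computation. A common way to express such a time course, shown here as a hedged sketch rather than the authors' actual pipeline, is percent enhancement relative to the mean of the pre-contrast baseline images (four in this protocol); the function name and example values are hypothetical.

```python
def relative_enhancement(signal, n_baseline=4):
    """Percent signal enhancement over the pre-contrast baseline.

    signal     : mean ROI signal intensity at each DCE time point
    n_baseline : number of pre-injection time points (4 in this protocol)
    """
    s0 = sum(signal[:n_baseline]) / n_baseline     # mean baseline signal
    return [100.0 * (s - s0) / s0 for s in signal]

# Hypothetical ROI intensities: 4 baseline points, then post-injection uptake.
curve = relative_enhancement([100, 100, 100, 100, 150, 200, 180])
print(curve)  # baseline points sit near 0%, post-contrast points rise
```

A lower, flatter curve in the tumor ROI, as reported for the LipoDox group, would then indicate less contrast-related signal change than in the other groups.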

Keywords: doxorubicin, dynamic contrast-enhanced MRI, lipodox, magnetic resonance imaging, VX2 tumor model

Procedia PDF Downloads 446