Search results for: application of law
878 Dose Profiler: A Tracking Device for Online Range Monitoring in Particle Therapy
Authors: G. Battistoni, F. Collamati, E. De Lucia, R. Faccini, C. Mancini-Terracciano, M. Marafini, I. Mattei, S. Muraro, V. Patera, A. Sarti, A. Sciubba, E. Solfaroli Camillocci, M. Toppi, G. Traini, S. M. Valle, C. Voena
Abstract:
Accelerated charged particles, mainly protons and carbon ions, are presently used in Particle Therapy (PT) to treat solid tumors. The precision of PT, which exploits the highly localized dose deposition of charged particles in tissue and their biological effectiveness in killing cancer cells, demands an online dose monitoring technique, crucial to improving the quality assurance of treatments: possible patient mis-positionings and biological changes with respect to the CT scan could negatively affect the therapy outcome. In PT, the beam range confined in the irradiated target can be monitored thanks to the secondary radiation produced by the interaction of the projectiles with the patient tissue. The Dose Profiler (DP) is a novel device designed to track charged secondary particles and reconstruct their longitudinal emission distribution, correlated with the Bragg peak position. The feasibility of this approach has been demonstrated by dedicated experimental measurements. The DP has been developed in the framework of the INSIDE project (MIUR, INFN and Centro Fermi, Museo Storico della Fisica e Centro Studi e Ricerche 'E. Fermi', Roma, Italy) and will be tested at the Proton Therapy center of Trento (Italy) by the end of 2017. The DP combines a tracker, made of six layers of two-view scintillating fibers with square cross section (0.5 x 0.5 mm²), with two layers of two-view scintillating bars (section 12.0 x 0.6 mm²). The electronic readout is performed by silicon photomultipliers. The sensitive area of the tracking planes is 20 x 20 cm². To optimize the detector layout, a Monte Carlo (MC) simulation based on the FLUKA code has been developed. The complete DP geometry and the track reconstruction code have been fully implemented in the MC. In this contribution, the DP hardware will be described. The expected detector performance, computed using a dedicated simulation of a 220 MeV/u carbon ion beam impinging on a PMMA target, will be presented, and the results will be discussed in the standard clinical application framework. A possible procedure for real-time beam range monitoring is proposed, following the expectations in actual clinical operation.
Keywords: online range monitoring, particle therapy, quality assurance, tracking detector
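The abstract does not specify how the distal fall-off of the reconstructed longitudinal emission profile is located, so the sketch below is only an illustrative approach (not the authors' algorithm): it fits a falling sigmoid to a binned, synthetic emission profile and reports the 50% fall-off depth as a range surrogate. All values and the function shape are assumptions for illustration.

```python
# Illustrative sketch (not the DP reconstruction code): locate the distal fall-off of a
# longitudinal secondary-particle emission profile by fitting a falling sigmoid, assuming
# the 50% fall-off position correlates with the Bragg peak position.
import numpy as np
from scipy.optimize import curve_fit

def falling_sigmoid(z, a, z50, k, b):
    """Plateau 'a' falling to baseline 'b' around depth z50 (cm) with steepness k."""
    return b + a / (1.0 + np.exp((z - z50) / k))

# Hypothetical binned emission profile: counts of reconstructed tracks per depth bin.
z = np.linspace(0.0, 15.0, 60)                       # depth in PMMA [cm]
counts = falling_sigmoid(z, 1000.0, 9.0, 0.4, 50.0)  # synthetic "truth"
counts = np.random.poisson(counts).astype(float)     # add counting noise

p0 = [counts.max() - counts.min(), z[np.argmax(np.gradient(-counts))], 0.5, counts.min()]
popt, pcov = curve_fit(falling_sigmoid, z, counts, p0=p0)
z50, z50_err = popt[1], np.sqrt(pcov[1, 1])
print(f"Estimated emission fall-off position: {z50:.2f} +/- {z50_err:.2f} cm")
```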
877 Real-Time Quantitative Polymerase Chain Reaction Assay for the Detection of microRNAs Using Bi-Directional Extension Sequences
Authors: Kyung Jin Kim, Jiwon Kwak, Jae-Hoon Lee, Soo Suk Lee
Abstract:
MicroRNAs (miRNAs) are a class of endogenous, single-stranded, small, non-protein-coding RNA molecules, typically 20-25 nucleotides long. They are thought to regulate the expression of a broad range of other genes by binding to the 3'-untranslated regions (3'-UTRs) of specific mRNAs. The detection of miRNAs is very important for understanding the function of these molecules and for the diagnosis of a variety of human diseases. However, detection of miRNAs is very challenging because of their short length and the high sequence similarities within miRNA families, so a simple-to-use, low-cost, and highly sensitive method for their detection is desirable. In this study, we demonstrate a novel bi-directional extension (BDE) assay. In the first step, a specific linear RT primer is hybridized to 6-10 base pairs from the 3'-end of a target miRNA molecule and then reverse transcribed to generate a cDNA strand. After reverse transcription, the cDNA is hybridized at its 3'-end to the BDE sequence, which serves as the PCR template. The PCR template was amplified in a SYBR Green-based quantitative real-time PCR. To prove the concept, we used human brain total RNA; it could be detected quantitatively over a range of seven orders of magnitude with excellent linearity and reproducibility. To evaluate the performance of the BDE assay, we compared its sensitivity and specificity against a commercially available poly(A) tailing method using let-7e miRNA extracted from A549 human lung epithelial cancer cells. The BDE assay displayed good performance compared with the poly(A) tailing method in terms of specificity and sensitivity: the CT values differed by 2.5 cycles, and the melting curve was sharper than that of the poly(A) tailing method. We have demonstrated an innovative, cost-effective BDE assay that allows improved sensitivity and specificity in the detection of miRNAs. The dynamic range of the SYBR Green-based RT-qPCR for miR-145 could be represented quantitatively over 7 orders of magnitude, from 0.1 pg to 1.0 μg of human brain total RNA. Finally, the BDE assay for the detection of miRNA species such as let-7e shows good performance compared with the poly(A) tailing method in terms of specificity and sensitivity. Thus, BDE provides a simple, low-cost, and highly sensitive assay for various miRNAs and should contribute significantly to research on miRNA biology and to disease diagnostics with miRNAs as targets.
Keywords: bi-directional extension (BDE), microRNA (miRNA), poly (A) tailing assay, reverse transcription, RT-qPCR
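A seven-order-of-magnitude dynamic range of the kind quoted above is usually demonstrated with a standard curve of CT versus log10 of input RNA. The minimal sketch below uses entirely hypothetical CT values (not the study's data) to show how the slope, linearity, and amplification efficiency of such a curve are derived.

```python
# Minimal RT-qPCR standard-curve sketch with hypothetical CT values: fit CT vs.
# log10(input RNA) over seven orders of magnitude and derive amplification efficiency.
import numpy as np

input_ng = np.array([1e-4, 1e-3, 1e-2, 1e-1, 1e0, 1e1, 1e2])    # 0.1 pg .. 100 ng (hypothetical)
ct       = np.array([33.1, 29.8, 26.4, 23.1, 19.7, 16.4, 13.0])  # hypothetical CT values

x = np.log10(input_ng)
slope, intercept = np.polyfit(x, ct, 1)
r2 = np.corrcoef(x, ct)[0, 1] ** 2
efficiency = 10 ** (-1.0 / slope) - 1.0   # an ideal slope of ~ -3.32 gives ~100% efficiency

print(f"slope = {slope:.2f}, R^2 = {r2:.3f}, efficiency = {efficiency:.1%}")
```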
876 Gait Analysis in Total Knee Arthroplasty
Authors: Neeraj Vij, Christian Leber, Kenneth Schmidt
Abstract:
Introduction: Total knee arthroplasty is a common procedure. It is well known that the biomechanics of the knee do not fully return to their normal state after surgery. Motion analysis has been used to study the biomechanics of the knee after total knee arthroplasty. The purpose of this scoping review is to summarize the current use of gait analysis in total knee arthroplasty and to identify the preoperative motion analysis parameters for which a systematic review aimed at determining reliability and validity may be warranted. Materials and Methods: This IRB-exempt scoping review strictly followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR) checklist. Five search engines were searched for a total of 279 articles. Articles underwent title and abstract screening followed by full-text screening. Included articles were placed in the following sections: the role of gait analysis as a research tool for operative decisions, other research applications for motion analysis in total knee arthroplasty, gait analysis as a tool for predicting radiologic outcomes, and gait analysis as a tool for predicting clinical outcomes. Results: Eleven articles studied gait analysis as a research tool for operative decisions. Motion analysis is currently used to study surgical approaches, surgical techniques, and implant choice. Five articles studied other research applications for motion analysis in total knee arthroplasty; these include studying the role of unicompartmental knee arthroplasty and novel physical therapy protocols aimed at optimizing postoperative care. Two articles studied motion analysis as a tool for predicting radiographic outcomes; preoperative gait analysis has identified parameters that can predict postoperative tibial component migration. Fifteen articles studied motion analysis in conjunction with clinical scores. Conclusions: There is a broad range of applications within the research domain of total knee arthroplasty, and the range of potential applications is likely larger. However, the current literature is limited by vague definitions of 'gait analysis' or 'motion analysis' and by a limited number of articles with both preoperative and postoperative functional and clinical measures. Knee adduction moment, knee adduction impulse, total knee range of motion, varus angle, cadence, stride length, and velocity have the potential for integration into composite clinical scores. A systematic review aimed at determining the validity, reliability, sensitivities, and specificities of these variables is warranted.
Keywords: motion analysis, joint replacement, patient-reported outcomes, knee surgery
875 Boiler Ash as a Reducer of Formaldehyde Emission in Medium-Density Fiberboard
Authors: Alexsandro Bayestorff da Cunha, Débora Caline de Mello, Camila Alves Corrêa
Abstract:
In the production of fiberboards, an adhesive based on urea-formaldehyde resin is used, which has the advantages of low cost, homogeneous distribution, solubility in water, high reactivity in an acid medium, and high adhesion to wood. Its disadvantages, on the other hand, are low resistance to humidity and the release of formaldehyde. The objective of the study was to determine the viability of adding industrial boiler ash to the urea-formaldehyde-based adhesive for the production of medium-density fiberboard. The raw material was composed of Pinus spp. fibers, urea-formaldehyde resin, paraffin emulsion, ammonium sulfate, and boiler ash. The experimental plan, consisting of 8 treatments, was completely randomized in a factorial arrangement, with 0%, 1%, 3%, and 5% ash added to the adhesive, with and without the application of a catalyst. In each treatment, 4 panels were produced with a density of 750 kg.m⁻³, dimensions of 40 x 40 x 1.5 cm, 12% urea-formaldehyde resin, 1% paraffin emulsion, and hot pressing at a temperature of 180ºC and a pressure of 40 kgf.cm⁻² for 10 minutes. The different adhesive compositions were characterized in terms of viscosity, pH, gel time, and solids content, and the panels by their physical and mechanical properties, in addition to evaluation with the IMAL DPX300 X-ray densitometer and of formaldehyde emission by the perforator method. The results showed a significant reduction of all adhesive properties with the use of the catalyst, regardless of the treatment, while increasing the ash percentage increased the average values of viscosity, gel time, and solids and reduced the pH for the panels with catalyst; for panels without catalyst, the behavior was the opposite, with the exception of solids. For the physical properties, the results for density, compaction ratio, and thickness were equivalent and in accordance with the standard, while the moisture content was significantly reduced by the use of the catalyst but not influenced by the percentage of ash. The density profile for all treatments was characteristic of medium-density fiberboard, with surfaces more compacted and denser than the central layer. Thickness swelling was not influenced by the catalyst or the use of ash, presenting average values within the normalized parameters. For the mechanical properties, the ash negatively affected the modulus of rupture from 1% onwards and the traction test from 3% onwards; however, only the latter property, at 3% and 5%, fell below the minimum limit of the standard. The use of the catalyst and of ash at 3% and 5% reduced the formaldehyde emission of the panels; however, only the panels with catalyzed adhesive presented emissions below 8 mg of formaldehyde per 100 g of panel. It can therefore be said that boiler ash can be added to the catalyzed adhesive at up to 1% without impairing the technological properties.
Keywords: reconstituted wood panels, formaldehyde emission, technological properties of panels, perforator
874 An Experimental Study on the Coupled Heat Source and Heat Sink Effects on Solid Rockets
Authors: Vinayak Malhotra, Samanyu Raina, Ajinkya Vajurkar
Abstract:
Enhancing rocket efficiency by controlling external factors in solid rocket motors has been an active area of research for most terrestrial and extra-terrestrial system operations. Appreciable work has been done, but the complexity of the problem, with its heterogeneous heat and mass transfer, has prevented thorough understanding. Severe incidents are on record, amounting to irreplaceable loss of human life, instruments, and facilities, and to the huge amounts of money invested every year. The coupled effect of an external heat source and an external heat sink is an aspect yet to be articulated in combustion. Better understanding of this coupled phenomenon will lead to higher safety standards, more efficient missions, and reduced hazard risks, along with better design, validation, and testing. The experiment will help in understanding the coupled effect of an external heat sink and heat source on the burning process, contributing to better combustion and fire safety, which are very important for efficient and safer rocket flights and space missions. Safety is the most prevalent issue in rockets and, together with poor combustion efficiency, it motivates research efforts to evolve superior rockets. This has real engineering, scientific, and practical significance for systems and applications. One potential application is Solid Rocket Motors (S.R.M.). The study may help in: (i) understanding the effect on the efficiency of core engines due to the primary boosters when considered as a source, (ii) choosing suitable heat sink materials for space missions so as to vary the efficiency of the solid rocket depending on the mission, and (iii) giving an idea of how the preheating of a successive stage, due to the previous stage acting as a source, may affect the mission. The present work monitors the resultant temperature and thus the heat transfer, which is expected to be non-linear because of heterogeneous heat and mass transfer. The study will deepen the understanding of controlled inter-energy conversions and of the coupled effect of external source(s)/sink(s) surrounding the burning fuel, eventually leading to better combustion and thus better propulsion. The work is motivated by the need for enhanced fire safety and better rocket efficiency. The specific objective of the work is to understand the coupled effect of an external heat source and sink on propellant burning and to investigate the role of key controlling parameters. Results so far indicate that there exists a singularity in the coupled effect. The relative dominance of the external heat sink and heat source decides the relative rocket flight in Solid Rocket Motors (S.R.M.).
Keywords: coupled effect, heat transfer, sink, solid rocket motors, source
873 Impact of the Dog-Technic for D1-D4 and Longitudinal Stroke Technique for Diaphragm on Peak Expiratory Flow (PEF) in Asthmatic Patients
Authors: Victoria Eugenia Garnacho-Garnacho, Elena Sonsoles Rodriguez-Lopez, Raquel Delgado-Delgado, Alvaro Otero-Campos, Jesus Guodemar-Perez, Angelo Michelle Vagali, Juan Pablo Hervas-Perez
Abstract:
Asthma is a heterogeneous disease that has traditionally been managed with drug treatment. The osteopathic treatment we propose aims to determine, through a dorsal manipulation (Dog Technic D1-D4) and a diaphragm technique (Longitudinal Stroke), whether changes in forced expiratory flow appear in spirometry, in particular whether there is an increase in peak flow volumes post-intervention and post-effort, and whether the application of the two techniques together is more powerful than applying the Longitudinal Stroke alone. We also assessed whether this type of treatment has repercussions on breathlessness, a very common symptom in asthma, and, finally, whether facilitated vertebra pain decreased after manipulation. Methods—Participants were recruited from among students and professors of the university, aged 18-65. Patients (n = 18) were randomly assigned to one of two groups: group 1 (Longitudinal Stroke and Dog Technic dorsal manipulation) and group 2 (diaphragmatic technique, Longitudinal Stroke only). The statistical analysis compared the main indicator of airway obstruction, PEF (peak expiratory flow), in various situations, measured with the Datospir Peak-10 peak flow meter. The measurements were carried out in four phases: at rest, after the stress test, after the treatment, and after treatment plus the stress test. After each stress test, the level of dyspnea of each patient was evaluated using the Borg scale, regardless of group. In group 1, in addition to these parameters, spinous process pain was measured with an algometer before and after the manipulation. All data were recorded at the one-minute mark. Results—Twelve group 1 (Dog Technic and Longitudinal Stroke) patients responded positively to treatment; there were increases of 5.1% and 6.1% in PEF post-treatment and post-treatment plus effort, respectively. The results of the Borg scale, with which we measured the level of dyspnea, were positive: 54.95% of patients noted an improvement in breathing. In addition, comparison of the group means confirmed that group 1, in which the two techniques were applied, was 34.05% more effective than group 2, in which only one was applied. After the manipulation, pain fell in 38% of the cases. Conclusions—The impact of the Dog-Technic for D1-D4 and the Longitudinal Stroke technique for the diaphragm on peak expiratory flow (PEF) volumes in asthmatic patients was positive: there was a change in PEF post-intervention and post-treatment plus effort, and the combined-technique group proved more effective than the group in which only one technique was applied. Furthermore, this type of treatment decreased facilitated vertebra pain and was effective in improving dyspnea and the general well-being of the patient.
Keywords: ANS, asthma, manipulation, manual therapy, osteopathic
872 Evaluation of Sequential Polymer Flooding in Multi-Layered Heterogeneous Reservoir
Authors: Panupong Lohrattanarungrot, Falan Srisuriyachai
Abstract:
Polymer flooding is a well-known technique for controlling the mobility ratio in heterogeneous reservoirs, leading to improved sweep efficiency as well as a better wellbore profile. However, the low injectivity of a viscous polymer solution attenuates the oil recovery rate and consequently adds extra operating cost. This study attempts to improve the injectivity of the polymer solution while maintaining the recovery factor, enhancing the effectiveness of the polymer flooding method. The study is performed with a reservoir simulation program by modifying the conventional single polymer slug into sequential polymer flooding, with emphasis on increasing injectivity and also reducing the amount of polymer. Selection of the operating conditions for the single polymer slug, including pre-injected water, polymer concentration, and polymer slug size, is first performed for a layered heterogeneous reservoir with a Lorenz coefficient (Lk) of 0.32. The selected single-slug polymer flooding scheme is then modified into sequential polymer flooding with reduced polymer concentration in two different modes: constant polymer mass and reduced polymer mass. The effect of the Residual Resistance Factor (RRF) is also evaluated. From the simulation results, it is observed that the first polymer slug, with the highest concentration, mainly acts as a buffer between the displacing phase and the reservoir oil. Moreover, part of the polymer from this slug is also sacrificed to adsorption. The reduction of polymer concentration in the following slugs prevents bypassing due to an unfavorable mobility ratio. At the same time, the following slugs, with lower viscosity, can be injected easily through the formation, improving the injectivity of the whole process. Sequential polymer flooding with reduced polymer mass shows a great benefit, reducing total production time and the amount of polymer consumed by up to 10% without any downside. The only advantage of using constant polymer mass is a slight increment of the recovery factor (up to 1.4%), while the total production time is almost the same. Increasing the residual resistance factor of the polymer solution benefits mobility control by reducing the effective permeability to water. Nevertheless, higher adsorption results in low injectivity, extending the total production time. Modifying the single polymer slug into a sequence of reduced polymer concentrations yields major benefits in reducing production time as well as polymer mass. With a suitable design of the polymer flooding scheme, the recovery factor can even be further increased. This study shows that sequential polymer flooding can certainly be applied to reservoirs with a high degree of heterogeneity, since real implementation requires nothing more complex than a proper design of polymer slug size and concentration.
Keywords: polymer flooding, sequential, heterogeneous reservoir, residual resistance factor
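The heterogeneity measure quoted above (Lk = 0.32) is the Lorenz coefficient, computed from layer flow and storage capacities. The sketch below, with invented layer permeability, porosity, and thickness values (not the study's reservoir model), shows one common way of evaluating it.

```python
# Sketch: Lorenz coefficient (Lk) of a layered reservoir from hypothetical layer data.
# Lk = 2 * (area between the cumulative flow-capacity curve and the unit diagonal).
import numpy as np

k   = np.array([500.0, 200.0, 80.0, 30.0, 10.0])   # layer permeability [mD] (hypothetical)
phi = np.array([0.25, 0.22, 0.20, 0.18, 0.15])     # layer porosity (hypothetical)
h   = np.array([2.0, 3.0, 4.0, 3.0, 2.0])          # layer thickness [m] (hypothetical)

order   = np.argsort(k / phi)[::-1]                       # order layers by decreasing k/phi
flow    = np.cumsum((k * h)[order]) / np.sum(k * h)       # cumulative flow capacity
storage = np.cumsum((phi * h)[order]) / np.sum(phi * h)   # cumulative storage capacity

flow    = np.insert(flow, 0, 0.0)
storage = np.insert(storage, 0, 0.0)
lk = 2.0 * (np.trapz(flow, storage) - 0.5)
print(f"Lorenz coefficient Lk = {lk:.2f}")   # 0 = homogeneous, approaching 1 = highly heterogeneous
```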
871 Identification of Peroxisome Proliferator-Activated Receptors α/γ Dual Agonists for Treatment of Metabolic Disorders, In Silico Screening, and Molecular Dynamics Simulation
Authors: Virendra Nath, Vipin Kumar
Abstract:
Background: Type II diabetes mellitus is a foremost health problem worldwide, predisposing to increased mortality and morbidity. The undesirable effects of current medications have prompted researchers to develop more potent drug(s) against the disease. The peroxisome proliferator-activated receptors (PPARs) are members of the nuclear receptor family and play a vital role in the regulation of metabolic equilibrium. They can induce or repress genes associated with adipogenesis and with lipid and glucose metabolism. Aims: PPARα/γ agonistic hits were screened by hierarchical virtual screening, followed by molecular dynamics simulation and knowledge-based structure-activity relationship (SAR) analysis using approved PPARα/γ dual agonists. Methods: The PPARα/γ agonistic activity of compounds was investigated with Maestro through structure-based virtual screening and molecular dynamics (MD) simulation. Virtual screening of nuclear-receptor ligands was performed, and the binding modes and protein-ligand interactions of the newer entities were investigated. Further, binding energy prediction and stability studies using MD simulation of the PPARα and γ complexes were performed with the most promising hit, along with a comparative structural analysis of approved PPARα/γ agonists against the screened hit for knowledge-based SAR. Results and Discussion: The in silico approach recognized the nine most capable hits, which had better predicted binding energy than the reference drug compound (tesaglitazar). In this study, the key amino acid residues of the binding pockets of both targets, PPARα and PPARγ, were acknowledged as essential and were found to be involved in the key interactions with the most promising dual hit (ChemDiv-3269-0443). Stability studies using MD simulation of the PPARα and γ complexes were performed with this hit, and the root mean square deviation (RMSD) was found to stabilize around 2 Å and 2.1 Å, respectively. Frequency distribution data also revealed that the key residues of both proteins showed maximum contacts with the potent hit during the 20-nanosecond (ns) MD simulation. Knowledge-based SAR studies of PPARα/γ agonists were carried out using the 2D structures of approved drugs such as aleglitazar and tesaglitazar, to support the successful design and synthesis of PPAR agonistic candidates with anti-hyperlipidemic potential.
Keywords: computational, diabetes, PPAR, simulation
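The RMSD stability criterion quoted above (plateau near 2 Å) is the standard per-frame deviation from a reference structure. The minimal sketch below computes it on a purely synthetic coordinate array and assumes the frames are already superposed; it is an illustration of the metric, not the authors' analysis pipeline.

```python
# Minimal RMSD sketch: per-frame RMSD of atom coordinates against the first frame,
# assuming the trajectory frames have already been superposed (no rotational fitting shown).
import numpy as np

rng = np.random.default_rng(0)
n_frames, n_atoms = 200, 350
traj = rng.normal(scale=0.5, size=(n_frames, n_atoms, 3)).cumsum(axis=0) * 0.01
traj += rng.normal(scale=1.0, size=(1, n_atoms, 3))            # synthetic coordinates [Angstrom]

ref = traj[0]
rmsd = np.sqrt(((traj - ref) ** 2).sum(axis=2).mean(axis=1))   # shape: (n_frames,)
print(f"final-frame RMSD = {rmsd[-1]:.2f} Angstrom")
```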
870 Freshwater Source of Sapropel for Healthcare
Authors: Ilona Pavlovska, Aneka Klavina, Agris Auce, Ivars Vanadzins, Alise Silova, Laura Komarovska, Linda Paegle, Baiba Silamikele, Linda Dobkevica
Abstract:
Freshwater sapropel is a common material formed by complex biological transformations of Holocene sediments in the basins of lakes in Latvia and has the potential to be used as medical mud. Sapropel forms over a long period in shallow waters from slowly decomposing organic sediment and has different compositions depending on the location of the source, the surroundings, the water regime, etc. The official geological survey of Latvian lakes, from the Latvian lake database (ezeri.lv), was used in the selection of the exploration area. The multifunctional effect of sapropel on the whole organism is explained by its complex chemical and biological structure. This unique organic substance, with its ability to retain heat for a long time, ensures deep tissue warming and has a positive effect on the treatment of various joint and skin diseases. Sapropel is a valuable resource with multiple areas of application. The current study comprised an investigation of sapropel sediments and a survey of the five sites selected according to the criteria. It also included sampling at different depths and the initial treatment of the samples, evaluation of external signs, study of physical-chemical parameters, analysis of biochemical parameters, and evaluation of microbiological indicators. The main selection criteria were the depth of the sapropel deposits, the hydrological regime, the history of agriculture next to the lake, and potential exposure to industrial waste. One hundred and five sapropel samples were obtained from five lakes (Audzelu, Dunakla, Ivusku, Zielu, and Mazars Kivdalova) during the wintertime. The main goal of the study is to carry out detailed and systematic research on the medical properties of the sapropel obtainable in Latvia, to promote its scientifically based use in balneology, to develop new medical procedures and services, and to promote the development of new exportable products. Latvian freshwater sapropel could be used as a raw material for producing sapropel extract and using it as a remedy. All of the above brings us to the main question for sapropel usage in medicine, balneology, and pharmacy: how to develop quality criteria for raw sapropel and its extracts. The research was co-financed by the project "Analysis of characteristics of medical sapropel and its usage for medical purposes and elaboration of industrial extraction methods" No. 1.1.1.1/16/A/165.
Keywords: balneology, extracts, freshwater sapropel, Latvian lakes, medical mud, sapropel
869 Biology and Life Fertility of the Cabbage Aphid, Brevicoryne brassicae (L) on Cauliflower Cultivars
Authors: Mandeep Kaur, K. C. Sharma, P. L. Sharma, R. S. Chandel
Abstract:
Cauliflower is an important vegetable crop grown throughout the world and is attacked by a large number of insect pests at various stages of crop growth. Amongst them, the cabbage aphid, Brevicoryne brassicae (Linnaeus) (Hemiptera: Aphididae), is an important pest. Continued feeding by both nymphs and adults of this aphid causes yellowing, wilting, and stunting of plants. Among various management practices, the use of resistant cultivars is important and can be an effective method of reducing the population of this aphid, so it is imperative to have a complete record of the various biological parameters and life tables on specific cultivars. The biology and life fertility of the cabbage aphid were studied on five cauliflower cultivars, viz. Megha, Shweta, K-1, PSB-1, and PSBK-25, under controlled conditions of 20 ± 2°C, 70 ± 5% relative humidity, and a 16:8 h (light:dark) photoperiod. For studying biology, apterous viviparous adults were picked from the laboratory culture on all five cauliflower cultivars after rearing them for at least two generations and placed individually on plants of the desired cauliflower cultivars grown in pots, with ten replicates each. Daily records were made of the duration of the nymphal period, adult longevity, mortality in each stage, and the total number of progeny produced per female. These biological data were further used to construct a life fertility table for each cultivar. Statistical analysis showed a significant difference (P < 0.05) between the different growth stages and the mean number of laid nymphs. The maximum and minimum growth periods were observed on the Shweta and Megha (at par with K-1) cultivars, respectively. The maximum number of nymphs was laid on the Shweta cultivar (26.40 nymphs per female) and the minimum on the Megha (at par with K-1) cultivar (15.20 nymphs per female). The true intrinsic rate of increase (rm) was found to be maximum on Shweta (0.233 nymphs/female/day), followed by PSBK-25 (0.207 nymphs/female/day), PSB-1 (0.203 nymphs/female/day), Megha (0.166 nymphs/female/day), and K-1 (0.153 nymphs/female/day). The finite rate of natural increase (λ) was also found to be in the order K-1 < Megha < PSB-1 < PSBK-25 < Shweta, whereas the doubling time (DT) was in the order K-1 > Megha > PSB-1 > PSBK-25 > Shweta. The aphids reared on the K-1 cultivar had the lowest values of rm and λ and the highest value of DT, whereas on the Shweta cultivar the values of rm and λ were the highest and the value of DT the lowest. On the basis of these studies, the K-1 cultivar was found to be the least suitable and the Shweta cultivar the most suitable for cabbage aphid population growth. Although the cauliflower cultivars used in different parts of the world may differ, the results of the present studies indicate that the use of cultivars affecting multiplication rate and reproductive parameters could be a good solution for the management of the cabbage aphid.
Keywords: biology, cauliflower, cultivars, fertility
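The reported rm, λ, and DT values follow from a life-fertility table: rm solves the Euler-Lotka equation, λ = e^rm, and DT = ln 2 / rm. The sketch below uses entirely hypothetical survival (lx) and fecundity (mx) schedules, not the study's data, to show the calculation.

```python
# Sketch: intrinsic rate of increase (rm), finite rate (lambda) and doubling time (DT)
# from a hypothetical life-fertility table, via the Euler-Lotka equation
#   sum_x exp(-rm * x) * l_x * m_x = 1
import numpy as np
from scipy.optimize import brentq

age = np.arange(1, 21)                              # pivotal age in days (hypothetical)
lx  = np.clip(1.0 - 0.04 * (age - 1), 0.0, 1.0)     # age-specific survival (hypothetical)
mx  = np.where((age >= 8) & (age <= 18), 2.5, 0.0)  # daily nymphs per female (hypothetical)

euler_lotka = lambda r: np.sum(np.exp(-r * age) * lx * mx) - 1.0
rm  = brentq(euler_lotka, 1e-6, 2.0)                # solve for rm
lam = np.exp(rm)                                    # finite rate of increase
dt  = np.log(2.0) / rm                              # population doubling time

print(f"rm = {rm:.3f}/day, lambda = {lam:.3f}/day, DT = {dt:.1f} days")
```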
868 Development of a Reduced Multicomponent Jet Fuel Surrogate for Computational Fluid Dynamics Application
Authors: Muhammad Zaman Shakir, Mingfa Yao, Zohaib Iqbal
Abstract:
This study proposed four jet fuel surrogates (S1, S2, S3, and S4) with a careful selection of seven large hydrocarbon fuel components, ranging from C₉ to C₁₆, of higher molecular weight and higher boiling point, matching the standard molecular size distribution of actual jet fuel. The surrogates were composed of seven components: n-propylcyclohexane (C₉H₁₈), n-propylbenzene (C₉H₁₂), n-undecane (C₁₁H₂₄), n-dodecane (C₁₂H₂₆), n-tetradecane (C₁₄H₃₀), n-hexadecane (C₁₆H₃₄), and iso-cetane (iC₁₆H₃₄). The skeletal jet fuel surrogate reaction mechanism was developed by two approaches. The first is based on a decoupling methodology, describing a C₄-C₁₆ skeletal mechanism for the oxidation of heavy hydrocarbons and a detailed H₂/CO/C₁ mechanism for predicting the oxidation of small hydrocarbons. The combined skeletal jet fuel surrogate mechanism was compressed into 128 species and 355 reactions and can thereby be used in computational fluid dynamics (CFD) simulations. Extensive validation was performed for the individual single components, including ignition delay time, species concentration profiles, and laminar flame speed, based on various fundamental experiments under wide operating conditions, as well as for their blended mixtures. Among all the surrogates, S1 has been extensively validated against experimental data from a shock tube, a rapid compression machine, a jet-stirred reactor, a counterflow flame, and a premixed laminar flame over wide ranges of temperature (700-1700 K), pressure (8-50 atm), and equivalence ratio (0.5-2.0) to capture the properties of the target fuel Jet-A, while the remaining three surrogates, S2, S3, and S4, have been validated for shock tube ignition delay time only, to capture the ignition characteristics of the target fuels S-8 & GTL, IPK, and RP-3, respectively. Based on the newly proposed HyChem model, another four surrogates with similar components and compositions were developed, and parallel validations were performed as for the previously developed surrogates, but at high-temperature conditions only. After testing the mechanism prediction performance of the surrogates developed by the decoupling methodology, a comparison was made with the results of the surrogates developed by the HyChem model. It was observed that all four surrogates proposed in this study showed good agreement with the experimental measurements, and the study concludes that, like the decoupling methodology, the HyChem model also has great potential for the development of oxidation mechanisms for heavy alkanes because of its applicability, simplicity, and compactness.
Keywords: computational fluid dynamics, decoupling methodology, HyChem, jet fuel, surrogate, skeletal mechanism
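Constant-volume ignition delay, the main validation target quoted above, is commonly computed with a zero-dimensional reactor. The sketch below shows such a calculation in Cantera for one condition; the mechanism file name "surrogate_skeletal.yaml" and the fuel species name "NC12H26" are placeholders and not the authors' files.

```python
# Sketch: constant-volume ignition delay with Cantera at 1100 K, 20 atm, phi = 1.0.
# 'surrogate_skeletal.yaml' and the species name 'NC12H26' are placeholder assumptions
# standing in for the skeletal surrogate mechanism described in the abstract.
import numpy as np
import cantera as ct

gas = ct.Solution("surrogate_skeletal.yaml")
gas.TP = 1100.0, 20.0 * ct.one_atm
gas.set_equivalence_ratio(1.0, fuel="NC12H26", oxidizer="O2:1.0, N2:3.76")

reactor = ct.IdealGasReactor(gas)          # constant-volume, adiabatic reactor
sim = ct.ReactorNet([reactor])

times, temps = [], []
while sim.time < 0.05:                     # integrate up to 50 ms
    sim.step()
    times.append(sim.time)
    temps.append(reactor.T)

times, temps = np.array(times), np.array(temps)
tau_ign = times[np.argmax(np.gradient(temps, times))]   # max dT/dt criterion
print(f"ignition delay ~ {tau_ign * 1e3:.3f} ms")
```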
867 Gross and Clinical Anatomy of the Skull of Adult Chinkara, Gazella bennettii
Authors: Salahud Din, Saima Masood, Hafsa Zaneb, Habib Ur Rehman, Saima Ashraf, Imad Khan, Muqader Shah
Abstract:
The objectives of this study were (1) to examine gross morphological, osteometric, and clinically important landmarks in the skull of the adult Chinkara to obtain baseline data and (2) to study sexual dimorphism in male and female adult Chinkara through osteometry. For this purpose, after postmortem examination, the carcasses of adult Chinkara of known sex and age were buried in the locality of the Manglot Wildlife Park and Ungulate Breeding Centre, Nizampur, Pakistan; after a specific period of time, the bones were unearthed. Gross morphological features and various osteometric parameters of the skull were studied at the University of Veterinary and Animal Sciences, Lahore, Pakistan. The Chinkara skull was elongated in shape and consisted of thirty-two bones, comprising a cranial and a facial part. The facial region of the skull was formed by the maxilla, incisive, palatine, vomer, pterygoid, frontal, parietal, and nasal bones, the turbinates, the mandible, and the hyoid apparatus. The bony region of the cranium of the Chinkara was comprised of the occipital, ethmoid, sphenoid, interparietal, parietal, temporal, and frontal bones. The foramina identified in the facial region of the skull were the infraorbital, supraorbital, lacrimal, sphenopalatine, maxillary, and caudal palatine foramina. The foramina of the cranium were the internal acoustic meatus, external acoustic meatus, hypoglossal canal, transverse canal, sphenorbital fissure, carotid canal, foramen magnum, stylomastoid foramen, foramen rotundum, foramen ovale, and jugular foramen, and the rostral and caudal foramina that form the pterygoid canal. The measured craniometric parameters did not show statistically significant differences (p > 0.05) between male and female adult Chinkara, except that the palatine bone, OI, DO, IOCDE, OCT, ICW, IPCW, and PCPL were significantly higher (p < 0.05) in male than in female Chinkara, and the mean values of the mandibular parameters, except b and h, were significantly higher (p < 0.05) in male than in female Chinkara. Sexual dimorphism exists in some of the orbital and foramen magnum parameters, while high levels of sexual dimorphism were identified in the mandible. In conclusion, morphocraniometric studies of the Chinkara skull made it possible to identify the species-specific skull and to use clinical measurements in practical application.
Keywords: Chinkara, skull, morphology, morphometrics, sexual dimorphism
866 Proactive SoC Balancing of Li-ion Batteries for Automotive Application
Authors: Ali Mashayekh, Mahdiye Khorasani, Thomas weyh
Abstract:
The demand for battery electric vehicles (BEVs) is steadily increasing, and it can be assumed that electric mobility will dominate the market for individual transportation in the future. Regarding BEVs, the focus of state-of-the-art research and development is on vehicle batteries, since their properties primarily determine vehicles' characteristic parameters such as price, driving range, charging time, and lifetime. State-of-the-art battery packs consist of invariable configurations of battery cells connected in series and parallel. A promising alternative is battery systems based on multilevel inverters, which can alter the configuration of the battery cells during operation via semiconductor switches. The main benefit of such topologies is that a three-phase AC voltage can be generated directly from the battery pack, and no separate power inverters are required. Therefore, modular battery systems based on different multilevel inverter topologies and reconfigurable battery systems are currently under investigation. Another advantage of the multilevel concept is that the possibility of reconfiguring the battery pack allows battery cells with different states of charge (SoC) to be connected in parallel, so that low-loss balancing can take place between such cells. In contrast, in conventional battery systems, parallel-connected (hard-wired) battery cells are discharged via bleeder resistors to keep the individual SoCs of the parallel battery strands balanced, ultimately reducing the vehicle range. Different multilevel inverter topologies and reconfigurable batteries that make the aforementioned advantages possible have been described in the available literature. However, what has not yet been described is what an intelligent operating algorithm needs to look like to keep the SoCs of the individual battery strands of a modular battery system with integrated power electronics balanced. Therefore, this paper suggests an SoC balancing approach for Battery Modular Multilevel Management (BM3) converter systems, which can similarly be used for reconfigurable battery systems or other multilevel inverter topologies with parallel connectivity. The suggested approach attempts to utilize all converter modules simultaneously (bypassing individual modules should be avoided), because the parallel connection of adjacent modules reduces the phase strand's battery impedance. Furthermore, the presented approach tries to reduce the number of switching events when changing the switching state combination. Thereby, the ohmic battery losses and switching losses are kept as low as possible. Since no power is dissipated in any designated bleeder resistors and no designated active balancing circuitry is required, the suggested approach can be categorized as a proactive balancing approach. To verify the algorithm's validity, simulations are used.
Keywords: battery management system, BEV, battery modular multilevel management (BM3), SoC balancing
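The two objectives stated above, using as many modules as possible while discharging the fullest ones first and avoiding unnecessary switching events, can be illustrated with a simplified greedy selection. The sketch below is not the BM3 algorithm itself; the module data, scoring rule, and penalty weight are assumptions for illustration only.

```python
# Simplified sketch of the balancing idea (not the exact BM3 algorithm): at each step,
# choose which modules of one phase strand contribute to the requested voltage level,
# preferring higher-SoC modules during discharge and rewarding unchanged switch states.
from typing import List

def select_modules(soc: List[float], prev_state: List[int], n_required: int,
                   switch_penalty: float = 0.02) -> List[int]:
    """Return a 0/1 state per module: 1 = module contributes to the output level."""
    # Score = SoC (discharge the fullest modules first) + bonus for keeping the previous
    # state (fewer switching events -> lower switching losses).
    scores = [s + (switch_penalty if prev_state[i] == 1 else 0.0)
              for i, s in enumerate(soc)]
    chosen = sorted(range(len(soc)), key=lambda i: scores[i], reverse=True)[:n_required]
    return [1 if i in chosen else 0 for i in range(len(soc))]

# Hypothetical strand with 8 modules and a requested level of 5 contributing modules.
soc  = [0.81, 0.79, 0.80, 0.77, 0.83, 0.78, 0.82, 0.76]
prev = [1, 1, 1, 1, 1, 0, 0, 0]
print("next switching state:", select_modules(soc, prev, n_required=5))
```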
865 Measuring Systems Interoperability: A Focal Point for Standardized Assessment of Regional Disaster Resilience
Authors: Joel Thomas, Alexa Squirini
Abstract:
The key argument of this research is that every element of systems interoperability is an enabler of regional disaster resilience, and arguably should become a focal point for standardized measurement of communities’ ability to work together. Few resilience research efforts have focused on the development and application of solutions that measurably improve communities’ ability to work together at a regional level, yet a majority of the most devastating and disruptive disasters are those that have had a regional impact. The key findings of the research include a unique theoretical, mathematical, and operational approach to tangibly and defensibly measure and assess systems interoperability required to support crisis information management activities performed by governments, the private sector, and humanitarian organizations. A most effective way for communities to measurably improve regional disaster resilience is through deliberately executed disaster preparedness activities. Developing interoperable crisis information management capabilities is a crosscutting preparedness activity that greatly affects a community’s readiness and ability to work together in times of crisis. Thus, improving communities’ human and technical posture to work together in advance of a crisis, with the ultimate goal of enabling information sharing to support coordination and the careful management of available resources, is a primary means by which communities may improve regional disaster resilience. This model describes how systems interoperability can be qualitatively and quantitatively assessed when characterized as five forms of capital: governance; standard operating procedures; technology; training and exercises; and usage. The unique measurement framework presented defines the relationships between systems interoperability, information sharing and safeguarding, operational coordination, community preparedness and regional disaster resilience, and offers a means by which to implement real-world solutions and measure progress over the course of a multi-year program. The model is being developed and piloted in partnership with the U.S. Department of Homeland Security (DHS) Science and Technology Directorate (S&T) and the North Atlantic Treaty Organization (NATO) Advanced Regional Civil Emergency Coordination Pilot (ARCECP) with twenty-three organizations in Bosnia and Herzegovina, Croatia, Macedonia, and Montenegro. The intended effect of the model implementation is to enable communities to answer two key questions: 'Have we measurably improved crisis information management capabilities as a result of this effort?' and, 'As a result, are we more resilient?'
Keywords: disaster, interoperability, measurement, resilience
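The abstract does not publish the scoring mathematics behind the five-capital characterization, so the fragment below is only a hypothetical illustration of how such an assessment could be rolled into one composite index; the weights, maturity scores, and normalization are invented and are not the DHS/NATO model.

```python
# Hypothetical sketch only: rolling the five interoperability "capitals" into a single
# composite index. Weights and maturity scores are invented for illustration and do not
# represent the actual mathematics of the model described in the abstract.
capitals = {  # maturity scored 0-5 per capital (hypothetical self-assessment)
    "governance": 3.0,
    "standard_operating_procedures": 2.5,
    "technology": 4.0,
    "training_and_exercises": 2.0,
    "usage": 3.5,
}
weights = {name: 0.2 for name in capitals}  # equal weights as a placeholder assumption

index = sum(weights[name] * score for name, score in capitals.items()) / 5.0  # 0-1 scale
print(f"composite interoperability index = {index:.2f}")
```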
864 Strengthening Social and Psychological Resources - Project "Herausforderung" as a (Sports-) Pedagogical Concept in Adolescence
Authors: Kristof Grätz
Abstract:
Background: Coping with crisis situations (e.g., the identity crisis in adolescence) is an omnipresent part of today's socialization, and the ability to cope should be encouraged from childhood. For this reason, students should be given the opportunity to create, endure, and manage such crisis situations in a sporting context within the project "Herausforderung." They are to prove themselves by working on a self-assigned task, accompanied by 'coaches', in a place outside of their hometown. The aim of the project is to observe this process from a resource-oriented perspective. Health promotion, as called for by the WHO in the Ottawa Charter since 1986, includes strengthening psychosocial resources. These include cognitive, emotional, and social potentials that contribute to improving the quality of life, provide favourable conditions for coping with health burdens, and enable people to influence their physical performance and well-being self-confidently and actively. A systematic strengthening of psychosocial resources leads to an improvement in mental health and contributes decisively to the regular implementation and long-term maintenance of this health behavior. Previous studies have already shown significant increases in self-concept following experiential educational measures [Fengler, 2007; Eberle & Fengler, 2018] and positive effects of experience-based school trips on the social competence of students [Reuker, 2009]. Method: The research project examines the influence of the project "Herausforderung" on psychosocial resources such as self-efficacy, self-concept, social support, and group cohesion. The students participating in the project will be tested in a pre-post design in the context of the challenge. This test includes specific questions to capture the different psychosocial resources. For the measurement, modifications of existing scales with good item selectivity and reliability are largely used, so that acceptable item and scale values can be expected. Where necessary, the scales were adapted or shortened to the specific context in order to ensure a balanced relationship between reliability and test economy. Specifically, these are already tested scales such as FRKJ 8-16, FSKN, GEQ, and F-SozU. The aim is to achieve a sample size of n ≥ 100. Conclusion: The project will be reviewed with regard to its effectiveness, and implications for a resource-enhancing application in sports settings will be given. Conclusions are drawn as to what extent specific experiential educational content in physical education can have a health-promoting effect on the participants.
Keywords: children, education, health promotion, psychosocial resources
863 Simulation of Solar Assisted Absorption Cooling and Electricity Generation along with Thermal Storage
Authors: Faezeh Mosallat, Eric L. Bibeau, Tarek El Mekkawy
Abstract:
The availability of a wide variety of renewable resources in Canada, such as large reserves of hydro, biomass, solar, and wind, provides significant potential to improve the sustainability of energy use. As buildings represent a considerable portion of energy use in Canada, the application of distributed solar energy systems for heating and cooling may increase the share of renewable energy. Parabolic solar trough systems have seen limited deployment in cold northern climates, as they are more suitable for electricity production in southern latitudes. Heat production by concentrating solar rays using parabolic troughs can, however, overcome the poor efficiencies of flat panels and evacuated tubes in cold climates. A numerical dynamic model was developed to simulate an installed parabolic solar trough facility in Winnipeg. The results of the numerical model were validated using experimental data obtained from this system. The model is developed in Simulink and will be utilized to simulate a tri-generation system for heating, cooling, and electricity generation in remote northern communities. The main objective of this simulation is to obtain operational data on solar troughs in cold climates, as such data are lacking in the literature. In this paper, the validated Simulink model is applied to simulate a solar-assisted absorption cooling system along with electricity generation using an organic Rankine cycle (ORC) and thermal storage. A control strategy is employed to distribute the heated oil from the solar collectors among the three systems above, considering their temperature requirements. The modeling provides dynamic performance results using real-time, minute-by-minute meteorological data collected at the same location where the solar system is installed. This is a significant step beyond current models, as the available solar energy is calculated accurately at each time step, accounting for solar radiation fluctuations due to passing clouds. The solar absorption cooling is modeled to use the heat generated by the solar trough system and to provide cooling in summer for a greenhouse located next to the solar field. A natural gas water heater provides the required excess heat for the absorption cooling during periods of low or no solar radiation. The simulation results are presented for a summer month in Winnipeg and include the amount of electric power generated by the ORC and the contribution of solar energy to the cooling load provision.
Keywords: absorption cooling, parabolic solar trough, remote community, validated model
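The control strategy that routes the heated oil among absorption cooling, the ORC, and thermal storage is only summarized in the abstract, so the sketch below shows one plausible rule-based dispatch consistent with that description; the temperature thresholds and priority order are invented assumptions, not the paper's values.

```python
# Hedged sketch of a rule-based dispatch for the collector oil loop (thresholds invented):
# serve the absorption chiller first when its driving-temperature requirement is met,
# otherwise the ORC, otherwise thermal storage; fall back to the natural-gas heater
# for cooling when solar heat is insufficient.
def dispatch(oil_temp_c: float, cooling_demand_kw: float,
             t_chiller_min: float = 90.0, t_orc_min: float = 150.0) -> dict:
    routes = {"absorption_chiller": 0.0, "orc": 0.0, "storage": 0.0, "gas_backup": 0.0}
    if oil_temp_c >= t_chiller_min and cooling_demand_kw > 0.0:
        routes["absorption_chiller"] = 1.0      # priority 1: meet the greenhouse cooling load
    elif oil_temp_c >= t_orc_min:
        routes["orc"] = 1.0                     # priority 2: generate electricity
    elif oil_temp_c >= t_chiller_min:
        routes["storage"] = 1.0                 # priority 3: charge thermal storage
    if cooling_demand_kw > 0.0 and routes["absorption_chiller"] == 0.0:
        routes["gas_backup"] = 1.0              # low/no solar: gas water heater covers cooling
    return routes

print(dispatch(oil_temp_c=165.0, cooling_demand_kw=40.0))
print(dispatch(oil_temp_c=70.0, cooling_demand_kw=40.0))
```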
862 Enabling Self-Care and Shared Decision Making for People Living with Dementia
Authors: Jonathan Turner, Julie Doyle, Laura O’Philbin, Dympna O’Sullivan
Abstract:
People living with dementia should be at the centre of decision-making regarding goals for daily living. These goals include basic activities (dressing, hygiene, and mobility), advanced activities (finances, transportation, and shopping), and meaningful activities that promote well-being (pastimes and intellectual pursuits). However, there is limited involvement of people living with dementia in the design of technology to support their goals. A project is described that is co-designing intelligent computer-based support for, and with, people affected by dementia and their carers. The technology will support self-management, empower participation in shared decision-making with carers and help people living with dementia remain healthy and independent in their homes for longer. It includes information from the patient’s care plan, which documents medications, contacts, and the patient's wishes on end-of-life care. Importantly for this work, the plan can outline activities that should be maintained or worked towards, such as exercise or social contact. The authors discuss how to integrate care goal information from such a care plan with data collected from passive sensors in the patient’s home in order to deliver individualized planning and interventions for persons with dementia. A number of scientific challenges are addressed: First, to co-design with dementia patients and their carers computerized support for shared decision-making about their care while allowing the patient to share the care plan. Second, to develop a new and open monitoring framework with which to configure sensor technologies to collect data about whether goals and actions specified for a person in their care plan are being achieved. This is developed top-down by associating care quality types and metrics elicited from the co-design activities with types of data that can be collected within the home, from passive and active sensors, and from the patient’s feedback collected through a simple co-designed interface. These activities and data will be mapped to appropriate sensors and technological infrastructure with which to collect the data. Third, the application of machine learning models to analyze data collected via the sensing devices in order to investigate whether and to what extent activities outlined via the care plan are being achieved. The models will capture longitudinal data to track disease progression over time; as the disease progresses and captured data show that activities outlined in the care plan are not being achieved, the care plan may recommend alternative activities. Disease progression may also require care changes, and a data-driven approach can capture changes in a condition more quickly and allow care plans to evolve and be updated.
Keywords: care goals, decision-making, dementia, self-care, sensors
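A minimal illustration of the kind of check the monitoring framework implies is given below: weekly activity counts inferred from home sensors are compared against care-plan targets and unmet goals are flagged. The goals, counts, and thresholds are invented for illustration and are not the project's actual mapping.

```python
# Minimal illustration (invented goals and sensor counts): compare weekly activity counts
# derived from home sensors against care-plan targets and flag unmet goals, which could
# then trigger a review or an alternative activity recommendation.
care_plan_targets = {          # target occurrences per week (hypothetical)
    "walk_outdoors": 3,
    "social_contact": 2,
    "prepare_hot_meal": 5,
}
sensor_counts = {              # counts inferred from passive sensors (hypothetical)
    "walk_outdoors": 1,
    "social_contact": 2,
    "prepare_hot_meal": 6,
}

for goal, target in care_plan_targets.items():
    achieved = sensor_counts.get(goal, 0)
    status = "met" if achieved >= target else "NOT met"
    print(f"{goal}: {achieved}/{target} -> {status}")
```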
861 Application of Metaverse Service to Construct Nursing Education Theory and Platform in the Post-pandemic Era
Authors: Chen-Jung Chen, Yi-Chang Chen
Abstract:
While traditional virtual reality and augmented reality only allow for small-movement learning and cannot provide a truly immersive teaching experience that gives the illusion of movement, the new metaverse technology of content creation combined with immersive interactive simulation can come infinitely close to a natural teaching situation. However, the theory behind the metaverse mixed-reality virtual classroom has not yet been explored, and it is rarely implemented in situational simulation teaching in nursing education. Therefore, in the first year, the study intends to use grounded theory, case study methods, and in-depth interviews with nursing education and information experts. The interview data will be analyzed to investigate the uniqueness of metaverse development. The proposed analysis will lead to alternative theories and methods for the development of nursing education. In the second year, the plan is to integrate metaverse virtual situation simulation technology into an alternative teaching strategy in the pediatric nursing technology course and to explore nursing students' use of this teaching method in the construction of personal skills and experience. The unique features of distinct teaching platforms and development processes will be leveraged to deliver alternative teaching strategies in a nursing technology teaching environment. The aim is to increase learning achievement without compromising teaching quality and teacher-student relationships in the post-pandemic era. A descriptive and convergent mixed-methods design will be employed. Sixty third-year nursing students will be recruited to participate in the research and complete the pre-test. The students in the experimental group (N=30) have agreed to participate in 4 real-time mixed virtual situation simulation courses as self-practice after class and will take part in qualitative interviews after every 2 virtual situation courses; the control group (N=30) will adopt traditional self-learning practice methods after class. Both groups of students will take a post-test after the course. Data analysis will adopt descriptive statistics, paired t-tests, one-way analysis of variance, and qualitative content analysis. This study addresses key issues in the virtual reality environment for teaching and learning within the metaverse, providing valuable lessons and insights for enhancing the quality of education. The findings of this study are expected to contribute useful information for the future development of digital teaching and learning in nursing and other practice-based disciplines.
Keywords: metaverse, post-pandemic era, online virtual classroom, immersive teaching
860 Rohingya Problem and the Impending Crisis: Outcome of Deliberate Denial of Citizenship Status and Prejudiced Refugee Laws in South East Asia
Authors: Priyal Sepaha
Abstract:
A refugee crisis manifests challenges both for the refugees and for the asylum-giving state. The situation turns into a mega-crisis when it is compounded by prejudicial handling by the home state, inappropriate refugee laws, an exploding refugee population, and, above all, no hope of any foreseeable solution or remedy. This paper studies the impact on the capability of stateless Rohingyas to migrate and seek refuge of the rigid criteria of movement enforced both by Myanmar and by the adjoining countries in the name of national security. This theoretical study identifies the issues and the key factors and players that have precipitated the crisis. It further discusses the possible ramifications for the home, asylum-giving, and adjoining countries of not discharging their roles aptly. Additionally, an attempt has been made to understand the scarce response given to the impending crisis by regional organizations like SAARC, ASEAN, and CHOGM as well as international organizations like the United Nations Human Rights Council, the Security Council, the Office of the High Commissioner for Refugees, and so on, in the name of inadequate monetary funds and physical resources. Based on the refugee laws and practices pertaining to the case of the Rohingyas, this paper argues that the Rohingya crisis is in dire need of an effective action plan to curb and resolve the biggest humanitarian crisis of the century. This mounting human tragedy can be mitigated permanently by strengthening existing and creating new interdependencies among all stakeholders, as further neglect can drive the countries of the Indian subcontinent in particular, and South East Asia by and large, into violent civil conflict over the long-awaited civil rights of the marginalized Rohingyas. Curbing this mass crisis will require the application of coercive pressure and diplomatic persuasion on the home country to acknowledge the rights of its fleeing citizens. It further necessitates mustering adequate monetary funds and physical resources for the asylum-providing state. Additional challenges, such as devising mechanisms for the refugees' safe return and comprehensive planning for their holistic economic development and rehabilitation, also need to be addressed. These, however, can only come into effect with a conscious effort by the regional and international community to fulfil their assigned roles.
Keywords: asylum, citizenship, crisis, humanitarian, human rights, refugee, rohingya
859 Role of Yeast-Based Bioadditive on Controlling Lignin Inhibition in Anaerobic Digestion Process
Authors: Ogemdi Chinwendu Anika, Anna Strzelecka, Yadira Bajón-Fernández, Raffaella Villa
Abstract:
Anaerobic digestion (AD) has been used since time immemorial to treat organic wastes in the environment, especially in sewage and wastewater treatment. Recently, the rising need to increase renewable energy production from organic matter has caused the spectrum of AD substrates to expand and include a wider variety of organic materials, such as agricultural residues and farm manure, which are generated at around 140 billion metric tons annually worldwide. The problem, however, is that agricultural wastes are composed of materials that are heterogeneous and difficult to degrade, particularly lignin, which makes up about 0-40% of the total lignocellulose content. This study aimed to evaluate the impact of varying concentrations of lignin on biogas yields and their subsequent response to a commercial yeast-based bioadditive in batch anaerobic digesters. The experiments were carried out in batches for a retention time of 56 days with different lignin concentrations (200 mg, 300 mg, 400 mg, 500 mg, and 600 mg) subjected to different conditions, first to determine the concentration of the bioadditive that was most effective for overall process improvement and yield increase. The batch experiments were set up using 130 mL bottles with a working volume of 60 mL, maintained at 38°C in an incubator shaker (150 rpm). Digestate obtained from a local plant operating under mesophilic conditions was used as the starting inoculum, and commercial kraft lignin was used as the feedstock. Biogas measurements were carried out using the displacement method and were corrected to standard temperature and pressure using standard gas equations. Furthermore, the modified Gompertz equation was used to non-linearly regress the resulting data to estimate the gas production potential, production rates, and duration of the lag phases as indicators of the degree of lignin inhibition. The results showed that lignin had a strong inhibitory effect on the AD process: the higher the lignin concentration, the stronger the inhibition. The modelling also showed that the rates of gas production were influenced by the concentration of the lignin substrate added to the system; the higher the lignin concentration in mg (0, 200, 300, 400, 500, and 600), the lower the respective rate of gas production in mL/gVS/day (3.3, 2.2, 2.3, 1.6, 1.3, and 1.1), although the rate at 300 mg increased by 0.1 mL/gVS/day over that at 200 mg. The impact of the yeast-based bioadditive on the rate of production was most significant at 400 mg and 500 mg, where the rate was improved by 0.1 mL/gVS/day and 0.2 mL/gVS/day, respectively. This indicates that agricultural residues with higher lignin content may be more responsive to inhibition alleviation by the yeast-based bioadditive; therefore, further study of its application to the AD of agricultural residues with high lignin content will be the next step in this research.
Keywords: anaerobic digestion, renewable energy, lignin valorisation, biogas
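The modified Gompertz model mentioned above is commonly written as M(t) = P·exp(-exp(Rm·e/P·(λ - t) + 1)). The sketch below fits it to hypothetical cumulative biogas data (not the study's measurements) to recover the biogas potential P, maximum rate Rm, and lag phase λ used as inhibition indicators.

```python
# Sketch: fit the modified Gompertz model to hypothetical cumulative biogas data to
# estimate biogas potential P, maximum production rate Rm and lag phase lambda,
# the three indicators of lignin inhibition mentioned in the abstract.
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, P, Rm, lag):
    """Modified Gompertz: cumulative biogas [mL/gVS] at time t [days]."""
    return P * np.exp(-np.exp(Rm * np.e / P * (lag - t) + 1.0))

t = np.arange(0, 57, 2)                                              # 56-day batch, 2-day sampling
y = gompertz(t, 180.0, 6.0, 4.0) + np.random.normal(0, 3, t.size)    # hypothetical data

popt, _ = curve_fit(gompertz, t, y, p0=[150.0, 5.0, 2.0], maxfev=10000)
P, Rm, lag = popt
print(f"P = {P:.1f} mL/gVS, Rm = {Rm:.2f} mL/gVS/day, lag = {lag:.1f} days")
```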
Procedia PDF Downloads 91858 Predictive Modelling of Aircraft Component Replacement Using Imbalanced Learning and Ensemble Method
Authors: Dangut Maren David, Skaf Zakwan
Abstract:
Adequate monitoring of vehicle components in order to obtain high uptime is the goal of predictive maintenance; the major challenge faced by businesses in industry is the significant cost associated with delays in service delivery due to system downtime. Most of those businesses are interested in predicting such problems and proactively preventing them before they occur, which is the core advantage of Prognostic Health Management (PHM) applications. The recent emergence of Industry 4.0, or the industrial internet of things (IIoT), has led to the need for monitoring system activities and enhancing system-to-system or component-to-component interactions, and this has resulted in the generation of large volumes of data known as big data. Analysis of big data is increasingly important; however, due to complexity inherent in the dataset, such as imbalanced classification problems, it becomes extremely difficult to build a model with high precision. Data-driven predictive modeling for condition-based maintenance (CBM) has recently drawn research interest, with growing attention from both academia and industry. The large amounts of data generated from industrial processes inherently come with different degrees of complexity, which poses a challenge for analytics. Thus, the imbalanced classification problem exists pervasively in industrial datasets and can affect the performance of learning algorithms, yielding poor classifier accuracy in model development. Misclassification of faults can result in unplanned breakdowns, leading to economic loss. In this paper, an advanced approach for handling the imbalanced classification problem is proposed, and a prognostic model for predicting aircraft component replacement in advance is then developed by exploring aircraft historical data. The approach is based on a hybrid ensemble-based method that improves the prediction of the minority class during learning; we also investigate the impact of our approach on the multiclass imbalance problem. We validate the feasibility and effectiveness of our approach, in terms of its performance, using real-world aircraft operation and maintenance datasets spanning over 7 years. Our approach shows better performance compared to other similar approaches. We also validate the strength of our approach for handling multiclass imbalanced datasets, and the results again show good performance compared to other baseline classifiers.Keywords: prognostics, data-driven, imbalance classification, deep learning
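The authors' exact hybrid ensemble is not described in this abstract, so the following is only a representative sketch of ensemble learning on a heavily imbalanced replacement dataset. The synthetic data and the choice of imbalanced-learn's BalancedRandomForestClassifier are assumptions for illustration, not the paper's method.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
from imblearn.ensemble import BalancedRandomForestClassifier

# Synthetic stand-in for component-replacement records: ~2% positive (replacement) class
X, y = make_classification(n_samples=20000, n_features=30, weights=[0.98, 0.02],
                           random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=42)

# Ensemble that under-samples the majority class in each bootstrap,
# which tends to improve recall on the rare "replace" class
clf = BalancedRandomForestClassifier(n_estimators=200, random_state=42)
clf.fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te), digits=3))
```

Precision and recall on the minority class, rather than overall accuracy, are the quantities worth inspecting in the printed report.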
Procedia PDF Downloads 174857 Green Organic Chemistry, a New Paradigm in Pharmaceutical Sciences
Authors: Pesaru Vigneshwar Reddy, Parvathaneni Pavan
Abstract:
Green organic chemistry, one of the latest and most researched topics nowadays, has been in demand since the 1990s. Organic chemicals are among the most important starting materials for a great number of major chemical industries, and their production as raw materials or reagents for other applications is a major manufacturing sector covering polymers, pharmaceuticals, pesticides, paints, artificial fibers, food additives, etc. Organic synthesis on a large scale, compared to the laboratory scale, involves the use of energy, basic chemical ingredients from the petrochemical sector, and catalysts, and, after the end of the reaction, separation, purification, storage, packing, distribution, etc. These processes pose many health and safety problems for workers, in addition to the environmental problems caused by the use of chemicals and their deposition as waste. Green chemistry, with its 12 principles, seeks changes to the conventional ways that have been used for decades to make synthetic organic chemicals and promotes the use of less toxic starting materials. Green chemistry aims to increase the efficiency of synthetic methods, use less toxic solvents, reduce the number of stages in synthetic routes, and minimize waste as far as practically possible. In this way, organic synthesis becomes part of the effort toward sustainable development. Green chemistry also encourages research and innovation on alternative approaches to many practical aspects of organic synthesis in university and institutional research laboratories. By changing the methodologies of organic synthesis, health and safety will be improved at the small-scale laboratory level and also extended to industrial large-scale production processes through new techniques. Three key developments in green chemistry include the use of supercritical carbon dioxide as a green solvent, aqueous hydrogen peroxide as an oxidising agent, and the use of hydrogen in asymmetric synthesis. It also focuses on replacing traditional heating with modern methods such as microwave irradiation, so that the carbon footprint is reduced as far as possible. Another benefit of green chemistry is that it reduces environmental pollution through less toxic reagents, waste minimization, and more biodegradable by-products. In this paper, some of the basic principles, approaches, and early achievements of green chemistry, as a branch of chemistry concerned with how chemical reactions are carried out, are considered, together with a summary of the green chemistry principles. A discussion of the E-factor, the old and new syntheses of ibuprofen, microwave techniques, and some recent advancements is also included.Keywords: energy, e-factor, carbon footprint, microwave, sono-chemistry, advancement
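The E-factor discussed in this abstract is defined as the mass of waste generated per unit mass of product. A minimal worked sketch follows, using hypothetical batch masses rather than figures from the paper:

```python
def e_factor(total_input_kg: float, product_kg: float) -> float:
    """E-factor = kg of waste per kg of product, with waste taken as inputs minus product."""
    return (total_input_kg - product_kg) / product_kg

# Hypothetical batch: 1200 kg of reagents and solvents yields 150 kg of product
print(f"E-factor = {e_factor(1200, 150):.1f} kg waste per kg product")  # 7.0
```

Lower E-factors indicate a greener process; classical fine-chemical and pharmaceutical routes tend to have much higher values than bulk-chemical processes.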
Procedia PDF Downloads 306856 Microwave Dielectric Constant Measurements of Titanium Dioxide Using Five Mixture Equations
Authors: Jyh Sheen, Yong-Lin Wang
Abstract:
This research is dedicated to finding an alternative procedure for measuring the microwave dielectric properties of ceramic materials with high dielectric constants. For a composite of ceramic dispersed in a polymer matrix, the dielectric constants of composites with different concentrations can be obtained from various mixture equations. A further use of mixture rules is to calculate the permittivity of the ceramic from measurements on the composite. To this end, the analysis method and theoretical accuracy of six basic mixture laws, derived from three basic particle shapes of ceramic fillers, have been reported for ceramics with dielectric constants below 40 at microwave frequency. Similar research has been done for other well-known mixture rules, showing that both good matching of the theoretical curve with experimental results and a low potential theory error are important for good calculation accuracy. Recently, a modified mixture equation for high-dielectric-constant ceramics at microwave frequency was presented for strontium titanate (SrTiO3); it was selected from five well-known mixing rules and has shown good accuracy for high dielectric constant measurements. However, the accuracy of this modified equation for other high-dielectric-constant materials is still unclear. Therefore, the five well-known mixing rules are selected again to understand their application to other high-dielectric-constant ceramics. Another high-dielectric-constant ceramic, TiO2 with a dielectric constant of 100, was chosen for this research, and the theoretical error equations of the rules are derived. In addition to the theoretical work, experimental measurements are always required. Titanium dioxide is an interesting ceramic for microwave applications; in this research, its powder is adopted as the filler material and polyethylene powder as the matrix material. The dielectric constants of ceramic-polyethylene composites with various compositions were measured at 10 GHz. The theoretical curves of the five published mixture equations are shown together with the measured results to assess the curve-matching condition of each rule. Finally, based on the experimental observations and theoretical analysis, one of the five rules was selected and modified into a new powder mixture equation. This modified rule shows very good curve matching with the measurement data and low theoretical error. The dielectric constant of the pure filler medium (titanium dioxide) can then be calculated by these mixing equations from the measured dielectric constants of the composites, and the accuracy of estimating the dielectric constant of the pure ceramic by the various mixture rules is compared. The modified mixture rule also shows good measurement accuracy for the dielectric constant of titanium dioxide ceramic. This study can be applied to the measurement of microwave dielectric properties of other high-dielectric-constant ceramic materials in the future.Keywords: microwave measurement, dielectric constant, mixture rules, composites
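The specific mixture equations compared in the study are not listed in this abstract, so the sketch below uses Lichtenecker's logarithmic mixture rule purely as a representative example of the general procedure: predict the composite permittivity from the filler volume fraction, and invert the rule to back-calculate the filler (ceramic) permittivity from a measured composite value. The numbers are assumed for illustration (TiO2 permittivity of 100 as quoted in the abstract, a typical polyethylene value of about 2.3).

```python
import numpy as np

def lichtenecker_composite(eps_filler, eps_matrix, v_filler):
    """Composite permittivity from Lichtenecker's logarithmic mixture rule."""
    return np.exp(v_filler * np.log(eps_filler) + (1 - v_filler) * np.log(eps_matrix))

def lichtenecker_filler(eps_composite, eps_matrix, v_filler):
    """Invert the rule to estimate the pure-filler permittivity from a measured composite."""
    return np.exp((np.log(eps_composite) - (1 - v_filler) * np.log(eps_matrix)) / v_filler)

# Hypothetical case: TiO2 (eps ~ 100) dispersed in polyethylene (eps ~ 2.3), 30 vol% filler
eps_c = lichtenecker_composite(100.0, 2.3, 0.30)
print(f"predicted composite permittivity: {eps_c:.2f}")
print(f"back-calculated filler permittivity: {lichtenecker_filler(eps_c, 2.3, 0.30):.1f}")
```

Repeating the back-calculation for each mixture rule and each measured composition is what allows the rules' accuracies to be compared, as the abstract describes.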
Procedia PDF Downloads 367855 A Development of English Pronunciation Using Principles of Phonetics for English Major Students at Loei Rajabhat University
Authors: Pongthep Bunrueng
Abstract:
This action research presents the outcome of a development in English pronunciation using principles of phonetics for English major students at Loei Rajabhat University. The research is split into 5 separate modules: 1) Organs of Speech and How to Produce Sounds, 2) Monophthongs, 3) Diphthongs, 4) Consonant Sounds, and 5) Suprasegmental Features. Each module followed a 4-step action research process: 1) Planning, 2) Acting, 3) Observing, and 4) Reflecting. The research targeted 2nd-year students majoring in English Education at Loei Rajabhat University during the 2011 academic year. A mixed methodology employing both quantitative and qualitative research was used, which put theory into action, taking segmental features up to suprasegmental features. Multiple tools were employed, including the following documents: pre-test and post-test papers, evaluation and assessment papers, group work assessment forms, a presentation grading form, an observation of participants form, and a participant self-reflection form. In all 5 modules, the target group's post-test results were higher than the pre-test results, at the 0.01 level of statistical significance. All target groups attained results ranging from low to moderate and from moderate to high performance. Participants who attained low to moderate results had to re-sit the second round. During the first development stage, participants attended classes with group participation, in which they addressed planning through mutual co-operation and shared responsibility. Analytic induction of the strong points of this operation illustrated that learner cognition, comprehension, application, and group practices were all present, whereas the weak results of some participants could be attributed to biological differences, differences in life and learning, or individual differences in responsiveness and self-discipline. Participants who were required to be re-treated in Spiral 2 received the same treatment again. After the second treatment, the participants' test results from the 5 modules were higher than their pre-test scores, and their assessment and development stages also showed improved results. They showed greater confidence in participating in activities, produced higher-quality work, and correctly followed instructions for each activity. Analytic induction of the strong and weak points of this operation remained the same as for Spiral 1, though there were improvements to problems which existed prior to the second treatment.Keywords: action research, English pronunciation, phonetics, segmental features, suprasegmental features
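The pre-test/post-test comparison reported above can be illustrated with a paired-samples test. The study does not state which test was used, so the sketch below is only an assumption, with made-up scores for one module:

```python
from scipy import stats

# Hypothetical pre- and post-test scores for one module (same 20 learners, paired)
pre  = [12, 15, 10, 14, 11, 13, 9, 16, 12, 10, 14, 13, 11, 15, 12, 10, 13, 14, 11, 12]
post = [16, 18, 14, 17, 15, 16, 13, 19, 15, 14, 17, 16, 15, 18, 15, 13, 16, 17, 14, 15]

# Paired t-test on the score gains; p < 0.01 would indicate a significant improvement
t_stat, p_value = stats.ttest_rel(post, pre)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```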
Procedia PDF Downloads 299854 Demonstrating the Efficacy of a Low-Cost Carbon Dioxide-Based Cryoablation Device in Veterinary Medicine for Translation to Third World Medical Applications
Authors: Grace C. Kuroki, Yixin Hu, Bailey Surtees, Rebecca Krimins, Nicholas J. Durr, Dara L. Kraitchman
Abstract:
The purpose of this study was to perform a Phase I veterinary clinical trial with a low-cost, carbon-dioxide-based, passive-thaw cryoablation device as proof of principle for application in pets and translation to third-world treatment of breast cancer. This study was approved by the institutional animal care and use committee. Client-owned dogs with subcutaneous masses, primarily lipomas or mammary cancers, were recruited for the study. Inclusion was based on clinical history, lesion location, preanesthetic blood work, and fine needle aspirate or biopsy confirmation of the mass. Informed consent was obtained from the owners of dogs that met the inclusion criteria. Ultrasound assessment of mass extent was performed immediately prior to mass cryoablation. Dogs were placed under general anesthesia and sterilely prepared. A stab incision was created to insert a custom 4.19 OD x 55.9 mm length cryoablation probe (Kubanda Cryotherapy) into the mass. Originally designed for treating breast cancer in low-resource settings, this device has demonstrated potential in effectively necrosing subcutaneous masses. A dose escalation study of increasing freeze-thaw cycles (5/4/5, 7/5/7, and 10/7/10 min) was performed to assess the size of the iceball and the necrotic extent of cryoablation. Each dog was allowed to recover for ~1-2 weeks before surgical removal of the mass. A single mass was treated in each of seven dogs (2 mammary masses, a sarcoma, 4 lipomas, and 1 adnexal mass), with most masses exceeding 2 cm in any dimension. Mass involution was most evident in the malignant mammary and adnexal masses. Lipomas showed minimal shrinkage prior to surgical removal, but an area of necrosis was evident along the cryoablation probe path. Gross assessment indicated a clear margin of cryoablation along the cryoprobe independent of tumor type. Detailed histopathology is pending, but complete involution of large lipomas appeared to be unlikely with the 10/7/10 protocol. The low-cost, carbon dioxide-based cryotherapy device permits a minimally invasive technique that may be useful for veterinary applications, but the findings also suggest that benign adipose breast masses encountered in third-world countries are unlikely to resolve with this approach.Keywords: cryoablation, cryotherapy, interventional oncology, veterinary technology
Procedia PDF Downloads 131853 Application of Fatty Acid Salts for Antimicrobial Agents in Koji-Muro
Authors: Aya Tanaka, Mariko Era, Shiho Sakai, Takayoshi Kawahara, Takahide Kanyama, Hiroshi Morita
Abstract:
Objectives: Aspergillus niger and Aspergillus oryzae are used as koji fungi in brewing. Since the koji-muro (the room for making koji) has a low level of airtightness, microbial contamination has long been a concern in alcoholic beverage production. Therefore, we focused on fatty acid salts, the main component of soap, which have been reported to show some antibacterial and antifungal activity. This study examined antimicrobial activities against Aspergillus and Bacillus spp., aiming to establish the effectiveness of fatty acid salts as antimicrobial agents in the koji-muro. Materials & Methods: A. niger NBRC 31628, A. oryzae NBRC 5238, A. oryzae (Akita Konno store) and Bacillus subtilis NBRC 3335 were chosen as test strains. Nine fatty acid salts, including potassium butyrate (C4K), caproate (C6K), caprylate (C8K), caprate (C10K), laurate (C12K), myristate (C14K), oleate (C18:1K), linoleate (C18:2K) and linolenate (C18:3K), at 350 mM and pH 10.5, were tested for antimicrobial activity. The fatty acid salts (FASs) and spore suspensions were prepared in plastic tubes. The spore suspension of each fungus (3.0×10⁴ spores/mL) or the bacterial suspension (3.0×10⁵ CFU/mL) was mixed with each of the fatty acid salts (final concentration of 175 mM), and the mixtures were incubated at 25 ℃. Samples were counted at 0, 10, 60, and 180 min by plating (100 µL) on potato dextrose agar, and fungal and bacterial colonies were counted after incubation for 1 or 2 days at 30 ℃. The MIC (minimum inhibitory concentration) is defined as the lowest concentration of drug sufficient to inhibit visible growth from spores after 10 min of incubation. MICs against fungi and bacteria were determined using the two-fold dilution method; each fatty acid salt was separately inoculated with 400 µL of Aspergillus spp. or B. subtilis NBRC 3335 at 3.0×10⁴ spores/mL or 3.0×10⁵ CFU/mL. Results: No obvious change was observed for the tested fatty acid salts against A. niger and A. oryzae. However, C12K showed an antibacterial effect of 5 log-units within 10 min of incubation against B. subtilis; thus, C12K suppressed 99.999% of bacterial growth. In addition, C10K showed an antibacterial effect of 5 log-units after 180 min of incubation against B. subtilis, and C18:1K, C18:2K and C18:3K showed antibacterial effects of 5 log-units within 10 min of incubation against B. subtilis. However, compared with unsaturated fatty acid salts, saturated fatty acid salts are lower in cost. These results suggest that C12K has potential for use in the koji-muro. The antimicrobial activity against other fungi and bacteria needs to be evaluated in the future.Keywords: Aspergillus, antimicrobial, fatty acid salts, koji-muro
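The "5 log-unit" reductions reported above correspond to 99.999% suppression of viable cells. A minimal sketch of that calculation, using hypothetical colony counts rather than the study's raw data:

```python
import math

def log_reduction(cfu_initial: float, cfu_after: float) -> float:
    """Log10 reduction between the initial and surviving viable counts."""
    return math.log10(cfu_initial / cfu_after)

# Hypothetical counts: 3.0e5 CFU/mL before treatment, 3 CFU/mL surviving after 10 min
lr = log_reduction(3.0e5, 3.0)
print(f"{lr:.1f} log reduction = {(1 - 10**-lr) * 100:.3f}% of cells killed")  # 5.0 log, 99.999%
```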
Procedia PDF Downloads 554852 Integrated Geophysical Surveys for Sinkhole and Subsidence Vulnerability Assessment, in the West Rand Area of Johannesburg
Authors: Ramoshweu Melvin Sethobya, Emmanuel Chirenje, Mihlali Hobo, Simon Sebothoma
Abstract:
The recent surge in residential infrastructure development around the metropolitan areas of South Africa has made thorough geotechnical assessments prior to site development necessary to ensure human and infrastructure safety. This paper appraises the success of multi-method geophysical techniques in delineating sinkhole vulnerability in a residential landscape. ERT, MASW, VES, magnetic and gravity surveys were conducted to assist in mapping sinkhole vulnerability, using an existing sinkhole as a constraint, at Venterspost town, west of Johannesburg. The combination of different geophysical techniques and the integration of their results proved useful in delineating the lithologic succession around the sinkhole locality and in determining the geotechnical characteristics of each layer and its contribution to the development of sinkholes, subsidence and cavities in the vicinity of the site. The results also assisted in determining the possible depth extent of the existing sinkhole and the locations where other similar karstic features and sinkholes could form. Results of the ERT, VES and MASW surveys uncovered dolomitic bedrock at varying depths around the sites, exhibiting high resistivity values in the range 2500-8000 ohm.m and corresponding high velocities in the range 1000-2400 m/s. The dolomite was found to be overlain by a weathered, chert-poor dolomite layer with resistivities in the range 250-2400 ohm.m and velocities of 500-600 m/s, from which the large sinkhole has collapsed. A 2.5D high-resolution shear wave velocity (Vs) map of the study area was compiled from 2D MASW profiles, offering insights into the prevailing lithological setup conducive to the formation of various types of karstic features around the site. 3D magnetic models of the site highlighted regions of possible subsurface interconnection between the existing large sinkhole and the other subsidence feature at the site, and a number of depth slices were used to detail the conditions near the sinkhole with increasing depth. Gravity survey results mapped the possible formational pathways for the development of new karstic features around the site. The combination and correlation of different geophysical techniques thus proved useful in delineating the site's geotechnical characteristics and mapping the possible depth extent of the existing sinkhole.Keywords: resistivity, magnetics, sinkhole, gravity, karst, delineation, VES
Procedia PDF Downloads 80851 Development of an Innovative Mobile Phone Application for Employment of Persons With Disabilities Toward the Inclusive Society
Authors: Marutani M, Kawajiri H, Usui C, Takai Y, Kawaguchi T
Abstract:
Background: To build an inclusive society, the Japanese government provides a "transition support for employment" system for Persons with Disabilities (PWDs). It is, however, difficult to provide appropriate accommodations due to their changeable health conditions. Mobile phone applications (Apps) are useful for monitoring their health conditions and their environments, and effective in improving reasonable accommodations for PWDs. Purpose: This study aimed to develop an App through which PWDs input self-assessments and make their health and environmental conditions visible. As a first step toward this goal, we investigated the items of the App. Methods: A qualitative and descriptive design was used for this study. Study participants were recruited by snowball sampling in July and August 2023; they had to have a minimum of five years of experience in supporting PWDs' employment. Semi-structured interviews were conducted on their assessment of PWDs' conditions of daily activities, health conditions, and living and working environments. A verbatim transcript was created from each interview, and from each transcript we extracted items in three groups: daily activities, health conditions, and living and working environments. Results: Fourteen participants were involved (average years of experience: 10.6 years). Based on the interviews, the three item groups were enriched. The daily activity items were divided into fifty-five; examples include "have meals in one's own style", "feel like one slept well", "wake-up time, bedtime, and mealtime are usually fixed", and "commute to the office and work without barriers". Thirteen health condition items were obtained, such as "feel no anxiety", "relieve stress", "focus on work and training", "have no pain", and "have the physical strength to work for one day". The living and working environment items were divided into fifty-two; examples include "have no barrier at home", "have supportive family members", "have time to take medication on time while at work", "commute time is just right", "people at work understand the symptoms", "room temperature and humidity are just right", and "get along well with friends in my own way". The participants also mentioned preferred styles for inputting self-assessments, for example that a face scale would be preferred to a number scale. Conclusion: The items enriched existing paper-based assessment items in terms of living and working environments because they were obtained from the perspective of PWDs. We now have to create the App and examine its usefulness with PWDs toward an inclusive society.Keywords: occupational health, innovative tool, people with disabilities, employment
Procedia PDF Downloads 55850 Applying Big Data Analysis to Efficiently Exploit the Vast Unconventional Tight Oil Reserves
Authors: Shengnan Chen, Shuhua Wang
Abstract:
Successful production of hydrocarbons from unconventional tight oil reserves has changed the energy landscape in North America. The oil contained within these reservoirs typically will not flow to the wellbore at economic rates without assistance from advanced horizontal wells and multi-stage hydraulic fracturing. Efficient and economic development of these reserves is a priority for society, government, and industry, especially under the current low oil prices. Meanwhile, society needs technological and process innovations that enhance oil recovery while reducing environmental impacts. Recently, big data analysis and artificial intelligence have become very popular, generating data-driven insights for better designs and decisions in various engineering disciplines; however, the application of data mining in petroleum engineering is still in its infancy. This research aims to apply intelligent data analysis and data-driven models to exploit unconventional oil reserves both efficiently and economically. More specifically, a comprehensive database including reservoir geological data, reservoir geophysical data, well completion data and production data for thousands of wells is first established to discover valuable insights and knowledge related to tight oil reserve development. Several data analysis methods are introduced to analyze such a huge dataset. For example, K-means clustering is used to partition all observations into clusters; principal component analysis is applied to emphasize the variation and bring out strong patterns in the dataset, making the big data easy to explore and visualize; and exploratory factor analysis (EFA) is used to identify the complex interrelationships between well completion data and well production data. Different data mining techniques, such as artificial neural networks, fuzzy logic, and machine learning techniques, are then summarized, and appropriate ones are selected to analyze the database based on prediction accuracy, model robustness, and reproducibility. Advanced knowledge and patterns are finally recognized and integrated into a modified self-adaptive differential evolution optimization workflow to enhance oil recovery and maximize the net present value (NPV) of the unconventional oil resources. This research will advance knowledge in the development of unconventional oil reserves and bridge the gap between big data and performance optimization in these formations. The newly developed data-driven optimization workflow is a powerful approach to guide field operations, leading to better designs, higher oil recovery and greater economic return for future wells in unconventional oil reserves.Keywords: big data, artificial intelligence, enhance oil recovery, unconventional oil reserves
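As a hedged illustration of the clustering and dimensionality-reduction steps described above (not the authors' actual workflow or data), the sketch below standardizes a synthetic well table, projects it with PCA, and partitions the wells with K-means:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Synthetic stand-in for a well database: rows = wells, columns = geological,
# completion and production attributes (e.g. porosity, stage count, lateral length, IP rate)
wells = rng.normal(size=(1000, 8))

X = StandardScaler().fit_transform(wells)          # put attributes on a common scale
pcs = PCA(n_components=3).fit_transform(X)         # emphasize the dominant variation
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(pcs)

print("wells per cluster:", np.bincount(labels))
```

In a real workflow, the cluster memberships and principal components would then feed the downstream predictive models and the optimization step.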
Procedia PDF Downloads 283849 Application of Principal Component Analysis and Ordered Logit Model in Diabetic Kidney Disease Progression in People with Type 2 Diabetes
Authors: Mequanent Wale Mekonen, Edoardo Otranto, Angela Alibrandi
Abstract:
Diabetic kidney disease is one of the main microvascular complications caused by diabetes. Several clinical and biochemical variables are reported to be associated with diabetic kidney disease in people with type 2 diabetes; however, their interrelations could distort the effect estimates of these variables for the disease's progression. The objective of the study is to determine, through advanced statistical methods, how the biochemical and clinical variables in people with type 2 diabetes are interrelated with each other and what their effects are on kidney disease progression. First, principal component analysis was used to explore how the biochemical and clinical variables intercorrelate, which helped us reduce a set of correlated biochemical variables to a smaller number of uncorrelated variables. Then, ordered logit regression models (cumulative, stage, and adjacent) were employed to assess the effect of the biochemical and clinical variables on the ordinal response variable (progression of kidney function), taking the proportionality assumption into account for more robust effect estimation. This retrospective cross-sectional study retrieved data from a type 2 diabetes cohort in a polyclinic hospital at the University of Messina, Italy. The principal component analysis yielded three uncorrelated components: principal component 1, with negative loadings of glycosylated haemoglobin, glycemia, and creatinine; principal component 2, with negative loadings of total cholesterol and low-density lipoprotein; and principal component 3, with a negative loading of high-density lipoprotein and a positive loading of triglycerides. The ordered logit models (cumulative, stage, and adjacent) showed that the first component (glycosylated haemoglobin, glycemia, and creatinine) had a significant effect on the progression of kidney disease. For instance, the cumulative odds model indicated that the first principal component (a linear combination of glycosylated haemoglobin, glycemia, and creatinine) had a strong and significant effect on the progression of kidney disease, with an odds ratio of 0.423 (P value = 0.000). However, this effect was inconsistent across levels of kidney disease because the first principal component did not meet the proportionality assumption. To address the proportionality problem and provide robust effect estimates, alternative ordered logit models, such as the partial cumulative odds model, the partial adjacent category model, and the partial continuation ratio model, were used. These models suggested that clinical variables such as age, sex, body mass index, and medication (metformin), and biochemical variables such as glycosylated haemoglobin, glycemia, and creatinine, have a significant effect on the progression of kidney disease.Keywords: diabetic kidney disease, ordered logit model, principal component analysis, type 2 diabetes
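A minimal sketch of the two-stage approach described above (PCA followed by an ordered logit on a staged outcome) is shown below. The data are simulated, not the Messina cohort, and statsmodels' OrderedModel is used here only as one possible implementation of the cumulative odds model.

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(1)
# Simulated biochemical variables (stand-ins for HbA1c, glycemia, creatinine, lipids, ...)
X = rng.normal(size=(500, 7))
# Simulated ordinal outcome: three stages of kidney-function decline
stage = pd.Series(pd.Categorical.from_codes(rng.integers(0, 3, 500),
                                            ['mild', 'moderate', 'severe'], ordered=True))

# Stage 1: compress the correlated biochemistry into a few uncorrelated components
pcs = PCA(n_components=3).fit_transform(StandardScaler().fit_transform(X))
df = pd.DataFrame(pcs, columns=['PC1', 'PC2', 'PC3'])

# Stage 2: cumulative-odds (proportional odds) ordered logit on the components
model = OrderedModel(stage, df, distr='logit')
result = model.fit(method='bfgs', disp=False)
print(np.exp(result.params[['PC1', 'PC2', 'PC3']]))  # odds ratios for each component
```

Checking whether the proportional odds assumption holds for each component is what motivates the partial (non-proportional) variants discussed in the abstract.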
Procedia PDF Downloads 39