Search results for: Three Dimensional Modified Quadratic Congruence/Modified Prime (3-D MQC/MP) code
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6236

596 Performance Demonstration of Extendable NSPO Space-Borne GPS Receiver

Authors: Hung-Yuan Chang, Wen-Lung Chiang, Kuo-Liang Wu, Chen-Tsung Lin

Abstract:

National Space Organization (NSPO) completed the development of a space-borne GPS receiver in 2014, including design, manufacture, comprehensive functional testing, environmental qualification testing, and so on. The receiver's main performance figures include 8-meter positioning accuracy, 0.05 m/sec velocity accuracy, a cold-start time of at most 90 seconds, and operation in high-dynamic scenarios of up to 15 g. The receiver will be integrated into the autonomous FORMOSAT-7 NSPO-built satellite, scheduled for launch in 2019 to execute pre-defined scientific missions. The flight model of this receiver, manufactured in early 2015, will undergo comprehensive functional tests and environmental acceptance tests, which are expected to be completed by the end of 2015. The space-borne GPS receiver is a pure software design in which all GPS baseband signal processing is executed by a digital signal processor (DSP), of which currently only 50% of the throughput is used. In response to the booming global navigation satellite systems, NSPO will gradually expand this receiver into a multi-mode, multi-band, high-precision navigation receiver, and even a science payload, such as a global navigation satellite system reflectometry receiver. The fundamental purpose of this extension study is to port computation-heavy, reusable software algorithms such as signal acquisition and correlation to an FPGA, leaving the processor responsible for operational control, navigation solution, orbit propagation, and so on. Because FPGA technology develops and evolves rapidly, the new FPGA-upgraded system architecture should be able to achieve the goal of a multi-mode, multi-band, high-precision navigation receiver or scientific receiver. Finally, test results show that the new system architecture not only retains the original overall performance but also sets aside more resources for future expansion.
This paper explains the detailed DSP/FPGA architecture, development, and test results, and the goals of the next development stage of this receiver.
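
The acquisition-and-correlation stage mentioned above is the classic candidate for FPGA offloading. As a hedged illustration (a minimal sketch, not NSPO's actual implementation), the following shows code-phase acquisition by circular correlation of a received sequence against a local PRN replica:

```python
# Hypothetical sketch of code-phase acquisition by circular correlation.
# This is not NSPO's implementation; it only illustrates the operation
# that is typically ported from DSP software to FPGA hardware.

def circular_correlation(received, replica):
    """Correlation magnitude for every candidate code phase."""
    n = len(replica)
    return [sum(received[(k + shift) % n] * replica[k] for k in range(n))
            for shift in range(n)]

def acquire(received, replica):
    """Pick the code phase with the largest correlation peak."""
    corr = circular_correlation(received, replica)
    return max(range(len(corr)), key=corr.__getitem__)

# Toy +/-1 spreading code, received with a 3-sample delay:
code = [1, -1, 1, 1, -1, -1, 1, -1]
rx = code[-3:] + code[:-3]   # acquisition should recover phase 3
```

In hardware, the per-shift sums map naturally onto parallel multiply-accumulate units, which is why this stage benefits most from the FPGA port.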

Keywords: space-borne, GPS receiver, DSP, FPGA, multi-mode multi-band

Procedia PDF Downloads 350
595 Role of Climatic Conditions on Pacific Bluefin Tuna Thunnus orientalis Stock Structure

Authors: Ashneel Ajay Singh, Kazumi Sakuramoto, Naoki Suzuki, Kalla Alok, Nath Paras

Abstract:

Bluefin tuna (Thunnus orientalis) is one of the most economically valuable tuna species in the world. In recent years the stock has been observed to decline, and it is suspected that the stock-recruitment relationship and population structure are influenced by environmental and climatic variables. This study investigated the influence of environmental and climatic conditions on the trajectories of the different life stages of the North Pacific bluefin tuna. Exploratory analysis was performed for North Pacific sea surface temperature (SST) and the Pacific Decadal Oscillation (PDO) against the time series of the bluefin tuna cohorts (age-0, 1, 2, …, 9, 10+). Generalized Additive Modeling (GAM) was used to reconstruct the recruitment (R) trajectory. The spatial movement of the SST was also monitored from 1953 to 2012 in the distribution area of the bluefin tuna. The exploratory analysis showed a significant influence of SST and PDO on the age-0 time series, while the other age-group time series (ages 1, 2, …, 9, 10+) did not exhibit any significant correlations. PDO showed the most significant relationship in the months of October to December. Although the stock-recruitment relationship is of biological significance, the recruits (age-0) showed poor correlation with the spawning stock biomass (SSB); indeed, the most significant model incorporated the SSB, SST, and PDO together. The results show that the stock-recruitment relationship of the North Pacific bluefin tuna is multi-dimensional and cannot be adequately explained by the SSB alone. SST and PDO forcing of the population structure is of significant importance and needs to be accounted for when making harvesting plans for bluefin tuna in the North Pacific.
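
The multi-predictor stock-recruitment idea can be sketched numerically. As a hedged, library-free stand-in for the paper's GAM, the snippet below fits the linear additive special case R ≈ b0 + b1·SSB + b2·SST + b3·PDO by solving the normal equations; all data are synthetic, generated from known coefficients so the fit can be checked, and are not the paper's:

```python
# Hedged, library-free stand-in for the paper's GAM: fit the linear
# additive special case R ~ b0 + b1*SSB + b2*SST + b3*PDO by solving the
# normal equations. All data below are synthetic, not the paper's.

def solve(a, b):
    """Gaussian elimination with partial pivoting for a small system."""
    n = len(a)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (m[i][n] - sum(m[i][j] * x[j] for j in range(i + 1, n))) / m[i][i]
    return x

def fit_additive(rows, y):
    """Least-squares fit of y on the predictors, with an intercept."""
    xs = [[1.0] + list(r) for r in rows]
    p = len(xs[0])
    xtx = [[sum(r[i] * r[j] for r in xs) for j in range(p)] for i in range(p)]
    xty = [sum(r[i] * yi for r, yi in zip(xs, y)) for i in range(p)]
    return solve(xtx, xty)

# Synthetic (SSB, SST, PDO) grid and R = 2 + 0.5*SSB + 1.5*SST - 3*PDO:
data = [(s, t, p) for s in (10, 20, 30) for t in (14, 16) for p in (-1, 0, 1)]
R = [2 + 0.5 * s + 1.5 * t - 3.0 * p for s, t, p in data]
coef = fit_additive(data, R)   # recovers [2, 0.5, 1.5, -3]
```

A true GAM replaces each linear term with a smooth function, but the additive structure, and the point that R depends on SST and PDO as well as SSB, is the same.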

Keywords: pacific bluefin tuna, Thunnus orientalis, cohorts, recruitment, spawning stock biomass, sea surface temperature, pacific decadal oscillation, general additive model

Procedia PDF Downloads 222
594 Optimization of Platinum Utilization by Using Stochastic Modeling of Carbon-Supported Platinum Catalyst Layer of Proton Exchange Membrane Fuel Cells

Authors: Ali Akbar, Seungho Shin, Sukkee Um

Abstract:

The composition of catalyst layers (CLs) plays an important role in the overall performance and cost of proton exchange membrane fuel cells (PEMFCs). Low platinum loading, high utilization, and more durable catalysts remain critical challenges for PEMFCs. In this study, a three-dimensional material network model is developed to visualize the nanostructure of carbon-supported platinum (Pt/C) and Pt/VACNT catalysts with the aim of maximizing catalyst utilization. The quadruple-phase, randomly generated CL domain is formulated using a quasi-random, stochastic Monte Carlo-based method. This statistical four-phase (i.e., pore, ionomer, carbon, and platinum) model closely mimics the manufacturing process of CLs. Various CL compositions are simulated to elucidate the effect of electron, ion, and mass transport paths on the catalyst utilization factor. Based on the simulation results, the effects of key factors such as porosity, ionomer content, and Pt weight percentage in the Pt/C catalyst are investigated at the representative elementary volume (REV) scale. The results show that the relationship between ionomer content and Pt utilization is in good agreement with existing experimental findings. Furthermore, the model is applied to state-of-the-art Pt/VACNT CLs. The simulation results for Pt/VACNT-based CLs show exceptionally high catalyst utilization compared to Pt/C at different composition ratios. More importantly, this study reveals that the maximum catalyst utilization depends on the spacing between the carbon nanotubes for Pt/VACNT. The simulation results are expected to be useful in optimizing the nanostructural construction and composition of Pt/C and Pt/VACNT CLs.
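
The stochastic generation step can be illustrated in miniature. A hedged sketch, assuming invented volume fractions (the paper's quasi-random generator is more elaborate, e.g. platinum sits on carbon surfaces):

```python
import random

# Hedged sketch of the quadruple-phase idea: assign each voxel of a small
# 3-D domain to pore, ionomer, carbon or platinum with target volume
# fractions. The paper's quasi-random Monte Carlo generator is more
# elaborate; only the stochastic composition step is shown, with
# invented fractions.

PHASES = ("pore", "ionomer", "carbon", "platinum")

def generate_domain(n, fractions, seed=0):
    """Return an n*n*n voxel field as a flat list of phase labels."""
    rng = random.Random(seed)
    return rng.choices(PHASES, weights=fractions, k=n ** 3)

def realized_fractions(domain):
    """Realized volume fraction of each phase in the generated domain."""
    total = len(domain)
    return {p: domain.count(p) / total for p in PHASES}

domain = generate_domain(20, fractions=(0.40, 0.25, 0.30, 0.05))
frac = realized_fractions(domain)   # close to the target fractions
```

On such a voxel field, transport-path analysis then checks which platinum voxels are simultaneously connected to pore (gas), ionomer (protons), and carbon (electrons) networks, which is what defines the utilization factor.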

Keywords: catalyst layer, platinum utilization, proton exchange membrane fuel cell, stochastic modeling

Procedia PDF Downloads 105
593 An Evaluation on the Effectiveness of a 3D Printed Composite Compression Mold

Authors: Peng Hao Wang, Garam Kim, Ronald Sterkenburg

Abstract:

The applications of composite materials within the aviation industry have been increasing at a rapid pace. However, the growing use of composite materials has also led to growing demand for tooling to support its manufacturing processes. Tooling and tooling maintenance represent a large portion of the composite manufacturing process and cost. Therefore, the industry's adaptability to new techniques for fabricating high-quality tools quickly and inexpensively will play a crucial role in the growing popularity of composite materials in the aviation industry. One popular tool fabrication technique currently being developed involves additive manufacturing, such as 3D printing. Although additive manufacturing and 3D printing are not entirely new concepts, the technique has been gaining popularity due to its ability to fabricate components quickly, with low material waste and at low cost. In this study, a team of Purdue University School of Aviation and Transportation Technology (SATT) faculty and students investigated the effectiveness of a 3D printed composite compression mold. The mold was fabricated by 3D scanning a steel valve cover of an aircraft reciprocating engine and was then used to fabricate carbon fiber versions of the valve cover. The 3D printed composite compression mold was evaluated for its performance, durability, and dimensional stability, while the fabricated carbon fiber valve covers were evaluated for their accuracy and quality. The results and data gathered from this study will determine the effectiveness of the 3D printed composite compression mold in a mass-production environment and provide valuable information for future understanding, improvements, and design considerations of 3D printed composite molds.

Keywords: additive manufacturing, carbon fiber, composite tooling, molds

Procedia PDF Downloads 186
592 Chemical and Physical Properties and Biocompatibility of Ti–6Al–4V Produced by Electron Beam Rapid Manufacturing and Selective Laser Melting for Biomedical Applications

Authors: Bing–Jing Zhao, Chang-Kui Liu, Hong Wang, Min Hu

Abstract:

Electron beam rapid manufacturing (EBRM) and selective laser melting (SLM) are additive manufacturing processes that use 3D CAD data as a digital information source and energy in the form of a high-power electron beam or laser beam to create three-dimensional metal parts by fusing fine metallic powders together. Objective: The present study evaluated the mechanical properties, phase composition, corrosion behavior, and biocompatibility of Ti-6Al-4V produced by EBRM, SLM, and forging. Method: Ti-6Al-4V alloy standard test pieces were manufactured by EBRM, SLM, and forging according to AMS 4999, GB/T 228, and ISO 10993. The mechanical properties were measured on a universal testing machine. The phase composition was analyzed by X-ray diffraction and scanning electron microscopy (SEM). The corrosion behavior was analyzed electrochemically. Biocompatibility was assessed by co-culture with mesenchymal stem cells, using SEM and an alkaline phosphatase (ALP) assay to evaluate cell adhesion and differentiation, respectively. Results: The mechanical properties, phase composition, corrosion behavior, and biocompatibility of Ti-6Al-4V produced by EBRM and SLM were similar to those of the forged material and met the mechanical property requirements of the AMS 4999 standard. The EBRM product showed an α-phase microstructure, in contrast to the α'-phase microstructure of the SLM product. Mesenchymal stem cells adhered and differentiated well. Conclusion: Ti-6Al-4V alloy manufactured by EBRM and SLM can meet the medical standards examined in this study, but further work is needed before these techniques can be applied confidently in clinical practice.

Keywords: 3D printing, Electron Beam Rapid Manufacturing (EBRM), Selective Laser Melting (SLM), Computer Aided Design (CAD)

Procedia PDF Downloads 443
591 Outcome of Emergency Response Team System in In-Hospital Cardiac Arrest

Authors: Jirapat Suriyachaisawat, Ekkit Surakarn

Abstract:

Introduction: To improve early detection and reduce the mortality rate of in-hospital cardiac arrest, an Emergency Response Team (ERT) system was planned and implemented from June 2009 to detect pre-arrest conditions and respond to any concerns. The ERT consisted of on-duty physicians and nurses from the emergency department. The ERT calling criteria were: acute change of heart rate to < 40 or > 130 beats per minute, systolic blood pressure < 90 mmHg, respiratory rate < 8 or > 28 breaths per minute, O2 saturation < 90%, acute change in conscious state, acute chest pain, or concern about the patient. In the early phase of ERT implementation in our hospital (June 2009-2011), there was no statistically significant difference in in-hospital cardiac arrest incidence or overall hospital mortality rate. Since the introduction of the ERT service, we have conducted a continuous educational campaign to improve awareness in an attempt to increase use of the service. Methods: To investigate the outcome of the ERT system on in-hospital cardiac arrest and overall hospital mortality, we conducted a prospective, controlled before-and-after examination of the long-term effect of the ERT system on the incidence of cardiac arrest. We performed chi-square analysis to test for statistical significance. Results: Of a total of 623 ERT cases from June 2009 until December 2012, there were 72 calls in 2009, 196 in 2010, 139 in 2011, and 245 in 2012. The number of ERT calls per 1000 admissions was 7.69 in 2009-10, 5.61 in 2011, and 9.38 in 2012. The number of code blue calls per 1000 admissions decreased significantly from 2.28 to 0.99 (P < 0.001). The incidence of cardiac arrest decreased progressively from 1.19 to 0.34 per 1000 admissions, with the difference reaching statistical significance in 2012 (P < 0.001). The overall hospital mortality rate decreased by 8%, from 15.43 to 14.43 per 1000 admissions (P = 0.095).
Conclusions: ERT system implementation was associated with a progressive reduction in cardiac arrests over a three-year period, with the difference reaching statistical significance in the fourth year after implementation. We also found an inverse association between the number of ERT calls and the risk of cardiac arrest, but we found no significant difference in overall hospital mortality rate.
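
The ERT calling criteria listed above translate directly into a screening rule. A hedged sketch, in which the field names and thresholds' encoding are illustrative rather than the hospital's actual record layout:

```python
# Hedged sketch: the ERT calling criteria listed in the abstract expressed
# as a screening function. Field names are illustrative, not the
# hospital's actual record layout.

def ert_triggers(obs):
    """Return the list of ERT calling criteria met by one observation set."""
    reasons = []
    hr = obs.get("heart_rate")
    if hr is not None and (hr < 40 or hr > 130):
        reasons.append("heart rate < 40 or > 130 /min")
    sbp = obs.get("systolic_bp")
    if sbp is not None and sbp < 90:
        reasons.append("systolic BP < 90 mmHg")
    rr = obs.get("resp_rate")
    if rr is not None and (rr < 8 or rr > 28):
        reasons.append("respiratory rate < 8 or > 28 /min")
    spo2 = obs.get("spo2")
    if spo2 is not None and spo2 < 90:
        reasons.append("O2 saturation < 90%")
    for flag, label in (("altered_consciousness", "acute change in conscious state"),
                        ("acute_chest_pain", "acute chest pain"),
                        ("staff_concern", "staff concerned about the patient")):
        if obs.get(flag):
            reasons.append(label)
    return reasons

call = ert_triggers({"heart_rate": 135, "systolic_bp": 85, "spo2": 96})
```

Any non-empty result would prompt an ERT call; the "staff concern" flag deliberately keeps the human judgment criterion alongside the numeric thresholds.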

Keywords: emergency response team, ERT, cardiac arrest, emergency medicine

Procedia PDF Downloads 294
590 Optimal Beam for Accelerator Driven Systems

Authors: M. Paraipan, V. M. Javadova, S. I. Tyutyunnikov

Abstract:

The concept of an energy amplifier or accelerator-driven system (ADS) involves the use of a particle accelerator coupled with a nuclear reactor. The accelerated particle beam generates a supplementary source of neutrons, which allows subcritical operation of the reactor and consequently safe exploitation. The harder neutron spectrum achieved ensures better incineration of the actinides. The almost universal opinion is that the optimal beam for ADS is protons with energy around 1 GeV (gigaelectronvolt). In the present work, a systematic analysis of the energy gain is performed for proton beams with energies from 0.5 to 3 GeV and ion beams from deuterons to neon with energies between 0.25 and 2 AGeV (GeV per nucleon). The target is an assembly of metallic U-Pu-Zr fuel rods in a bath of lead-bismuth eutectic coolant. The rod length is 150 cm. A beryllium converter of length 110 cm is used in order to maximize the energy released in the target. The case of a linear accelerator is considered, with a beam intensity of 1.25‧10¹⁶ p/s and a total accelerator efficiency of 0.18 for the proton beam; these values are planned to be achieved in the European Spallation Source project. The energy gain G is calculated as the ratio of the energy released in the target to the energy spent accelerating the beam. The energy released is obtained through simulation with the code Geant4. The energy spent is calculated by scaling from the accelerator efficiency data for the reference particle (the proton). The analysis concerns the G values, the net power produced, the accelerator length, and the period between refuelings. The optimal energy for protons is 1.5 GeV; at this energy, G reaches a plateau around a value of 8, with a net power production of 120 MW (megawatts). From alpha particles upward, ion beams have a higher G than 1.5 GeV protons.
A beam of 0.25 AGeV ⁷Li achieves the same net power production as 1.5 GeV protons, has a G of 15, and needs an accelerator 2.6 times shorter than that for protons, representing the best solution for ADS. Beams of ¹⁶O or ²⁰Ne with energy 0.75 AGeV, accelerated in an accelerator of the same length as that for 1.5 GeV protons, produce approximately 900 MW of net power, with a gain of 23-25. The study of the evolution of the isotopic composition during irradiation shows that increasing the power production shortens the period between refuelings. For a net power production of 120 MW, the target can be irradiated for approximately 5000 days without refueling, but only 600 days when the net power reaches 1 GW (gigawatt).
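
The gain and net-power figures quoted above can be sanity-checked with back-of-the-envelope arithmetic. A hedged sketch using only the numbers given in the abstract:

```python
# Hedged back-of-the-envelope check of the figures quoted above: the gain
# G is the energy released in the target divided by the grid energy spent
# accelerating the beam, so net power = (G - 1) * grid power.

E_PER_EV = 1.602e-19  # joules per electronvolt

def beam_power_w(intensity_per_s, energy_gev):
    """Beam power on target, in watts."""
    return intensity_per_s * energy_gev * 1e9 * E_PER_EV

def net_power_mw(intensity_per_s, energy_gev, efficiency, gain):
    """Released power minus the accelerator's grid power draw, in MW."""
    grid = beam_power_w(intensity_per_s, energy_gev) / efficiency
    return (gain * grid - grid) / 1e6

# 1.5 GeV protons, 1.25e16 p/s, 18% accelerator efficiency, G = 8:
net = net_power_mw(1.25e16, 1.5, 0.18, 8.0)   # close to the quoted 120 MW
```

The 1.5 GeV beam carries about 3 MW on target, the accelerator draws roughly 17 MW from the grid, and G = 8 then yields a net output near the 120 MW plateau described in the abstract.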

Keywords: accelerator driven system, ion beam, electrical power, energy gain

Procedia PDF Downloads 123
589 Seismic Assessment of Non-Structural Component Using Floor Design Spectrum

Authors: Amin Asgarian, Ghyslaine McClure

Abstract:

Experience in past earthquakes has clearly demonstrated the necessity of seismic design and assessment of Non-Structural Components (NSCs), particularly in post-disaster structures such as hospitals and power plants, which must remain permanently functional and operational. Meeting this objective is contingent upon proper seismic performance of both structural and non-structural components. Proper seismic design, analysis, and assessment of NSCs can be attained through the generation of a Floor Design Spectrum (FDS), in a similar fashion to the target spectra used for structural components. This paper presents a methodology developed to generate the FDS directly from the corresponding Uniform Hazard Spectrum (UHS), i.e., the design spectrum for structural components. The methodology is based on experimental and numerical analysis of a database of 27 real reinforced concrete (RC) buildings located in Montreal, Canada. The buildings were tested by Ambient Vibration Measurements (AVM), and their dynamic properties were extracted and used as part of the approach. The database comprises 12 low-rise, 10 medium-rise, and 5 high-rise buildings, mostly designated as post-disaster/emergency shelters by the city of Montreal. The buildings were subjected to 20 seismic records compatible with the UHS of Montreal, and Floor Response Spectra (FRS) were developed for every floor in both horizontal directions, considering four damping ratios of NSCs (2, 5, 10, and 20% viscous damping). The generated FRS (approximately 132,000 curves) were studied statistically, and a methodology is proposed to generate the FDS directly from the corresponding UHS. The approach can generate the FDS for any selected floor level and NSC damping ratio, and it captures the effects of dynamic interaction between the primary (structural) and secondary (NSC) systems as well as the higher and torsional modes of the primary structure.
These are important improvements of this approach over conventional methods and code recommendations. Application of the proposed approach is demonstrated through two real case-study buildings: one low-rise and one medium-rise. The proposed approach can serve as a practical and robust tool for seismic assessment and design of NSCs, especially in existing post-disaster structures.
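
A floor response spectrum is assembled from many peak responses of damped single-degree-of-freedom oscillators. A hedged sketch of that core computation only (central-difference time stepping for one period/damping pair; the record below is a synthetic step excitation, not one of the study's 20 records):

```python
import math

# Hedged sketch of the building block behind a floor response spectrum:
# the peak response of a damped single-degree-of-freedom (SDOF)
# oscillator, computed by explicit central-difference time stepping for
# one (period, damping) pair. The study's FDS procedure statistically
# post-processes thousands of such spectra; only the core computation
# is shown, driven by a synthetic step excitation.

def peak_sdof_response(accel, dt, period, damping):
    """Peak |relative displacement| of a base-excited SDOF oscillator."""
    w = 2.0 * math.pi / period          # natural circular frequency
    zwdt = damping * w * dt
    u_prev, u, peak = 0.0, 0.0, 0.0     # at-rest initial conditions
    for a in accel:
        # u'' + 2*z*w*u' + w^2*u = -a_base, discretized centrally
        u_next = (-a * dt * dt + (2.0 - (w * dt) ** 2) * u
                  - (1.0 - zwdt) * u_prev) / (1.0 + zwdt)
        u_prev, u = u, u_next
        peak = max(peak, abs(u))
    return peak

# 1 m/s^2 step excitation, 1 s period, 5% damping (illustrative values):
dt = 0.005
record = [1.0] * int(10.0 / dt)
peak = peak_sdof_response(record, dt, period=1.0, damping=0.05)
```

The pseudo-acceleration ordinate of a spectrum is then w²·peak; repeating this over a grid of periods, for each floor acceleration record and each damping ratio, yields one FRS curve.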

Keywords: earthquake engineering, operational and functional components, operational modal analysis, seismic assessment and design

Procedia PDF Downloads 199
588 Evaluation of Solid-Gas Separation Efficiency in Natural Gas Cyclones

Authors: W. I. Mazyan, A. Ahmadi, M. Hoorfar

Abstract:

Objectives/Scope: This paper proposes a mathematical model for calculating the solid-gas separation efficiency in cyclones that provides better agreement with experimental results than existing mathematical models. Methods: The separation ratio efficiency, ϵsp, is evaluated by calculating the outlet-to-inlet particle count ratio. Similar to mathematical derivations in the literature, the inlet and outlet particle counts were evaluated based on an Eulerian approach. The model includes the external forces acting on the particle (i.e., centrifugal and drag forces) and evaluates the exact length the particle travels inside the cyclone in order to determine the number of turns it makes. The separation efficiency derivation, based on Stokes' law, accounts for the effect of the inlet tangential velocity on separation performance. In cyclones, the inlet velocity is a very important factor in determining separation performance, so the proposed model provides an accurate estimate of the actual cyclone separation efficiency. Results/Observations/Conclusion: The separation ratio efficiency, ϵsp, is studied to evaluate the performance of the cyclone for particles ranging from 1 to 10 microns. The proposed model is compared with results in the literature and shows an error of 7% between its predicted efficiency and the efficiency obtained from experimental results for 1-micron particles. At the same time, the proposed model gives the user the flexibility to analyze the separation efficiency at different inlet velocities. Additional Information: The proposed model determines the separation efficiency accurately and can also be used to optimize the separation efficiency of cyclones at low cost, replacing physical trial-and-error testing when exploring dimensional changes or measures that increase the particle centrifugal forces.
Ultimately, the proposed model provides a powerful tool to optimize and enhance existing cyclones at low cost.
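
The Stokes-law ingredient of such models can be illustrated with the textbook Lapple cut diameter, the particle size collected with 50% efficiency. The paper's model refines this with the exact travel length, turn count, and inlet-velocity dependence; all numbers below are illustrative:

```python
import math

# Hedged sketch of the Stokes-law ingredient of such models: Lapple's
# classic cut diameter d50, the particle size collected with 50%
# efficiency, from the balance of Stokes drag and centrifugal force.
# The proposed model refines this; all numbers below are illustrative.

def cut_diameter_m(mu, inlet_width, n_turns, v_inlet, rho_p, rho_g):
    """Lapple cut diameter in meters (SI inputs)."""
    return math.sqrt(9.0 * mu * inlet_width
                     / (2.0 * math.pi * n_turns * v_inlet * (rho_p - rho_g)))

# Air-like gas, 2500 kg/m^3 solids, 0.1 m inlet, 5 turns, 15 m/s inlet:
d50 = cut_diameter_m(mu=1.8e-5, inlet_width=0.1, n_turns=5,
                     v_inlet=15.0, rho_p=2500.0, rho_g=1.2)
```

With these inputs d50 lands at a few microns, i.e. inside the 1-10 micron range studied; raising the inlet velocity lowers d50, which is exactly the inlet-velocity sensitivity the proposed model emphasizes.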

Keywords: cyclone efficiency, solid-gas separation, mathematical model, models error comparison

Procedia PDF Downloads 376
587 Focus Group Study Exploring Researchers' Perspectives on Open Science Policy

Authors: E. T. Svahn

Abstract:

Knowledge about the factors that influence the exchange between research and society is of the utmost importance for developing collaboration between different actors, especially in future science policy development and the creation of support structures for researchers; among these factors are how researchers view the surrounding open science policy environment and the conditions and attitudes they bring to interacting with it. This paper examines Finnish researchers' attitudes towards open science policies in 2020. Open science is an integrated part of researchers' daily lives and supports not only the effectiveness of research outputs but also the quality of research. In the ideal case, open science policy is seen as a supporting structure that enables the exchange between research and society; in other cases, it can end up as red tape that generates obstacles and hinders efficient research. This study was carried out through focus group interviews. This qualitative method was selected because it aims at understanding the phenomenon under study, and because focus group interviews produce diverse and rich material that would not be available with other research methods. Focus group interviews have well-established applications in social science, especially in understanding the perspectives and experiences of research subjects; in this study, they were used to examine the mindset and actions of researchers. Each group comprised 4-10 people, and the aim was to bring out different perspectives on the subject. The interviewer enabled the presentation of different perceptions and opinions, and the interviews were recorded and transcribed. The material was analysed using the grounded theory method. The results are presented as thematic areas, a theoretical model, and direct quotations. Attitudes towards open science policy can vary greatly depending on the research area.
This study shows that open science policy demands vary somewhat between medicine, technology, and the natural sciences on the one hand and the social sciences, educational sciences, and the humanities on the other. The variation in attitudes between research areas can thus largely be explained by the fact that research outputs and ethical codes vary significantly between subjects. This study aims to increase understanding of the extent to which open science policies should be tailored for different disciplines and research areas.

Keywords: focus group interview, grounded theory, open science policy, science policy

Procedia PDF Downloads 134
586 Minimizing the Drilling-Induced Damage in Fiber Reinforced Polymeric Composites

Authors: S. D. El Wakil, M. Pladsen

Abstract:

Fiber reinforced polymeric (FRP) composites are finding widespread industrial applications because of their exceptionally high specific strength and specific modulus of elasticity. Nevertheless, ready-for-use components or products made of FRP composites are seldom obtained directly; secondary processing by machining, particularly drilling, is almost always required to make holes for fastening components together to produce assemblies. That creates problems, since FRP composites are neither homogeneous nor isotropic. The problems encountered include damage in the region around the drilled hole and drilling-induced delamination of the surface plies, which occurs at both the entrance and exit planes of the workpiece; evidently, the functionality of the workpiece is detrimentally affected. The current work was carried out with the aim of eliminating, or at least minimizing, the workpiece damage associated with drilling of FRP composites. Each test specimen was a woven graphite-fiber-reinforced epoxy composite with a thickness of 12.5 mm (0.5 inch). A large number of test specimens were subjected to drilling operations with different combinations of feed rates and cutting speeds. The drilling-induced damage was taken as the absolute value of the difference between the drilled hole diameter and the nominal one, expressed as a percentage of the nominal diameter. This damage measure was determined for each combination of feed rate and cutting speed, and a matrix of those values was established, where the columns correspond to varying feed rates and the rows to varying cutting speeds. Next, the analysis of variance (ANOVA) approach was employed using Minitab software in order to obtain the combination that minimizes the drilling-induced damage. Experimental results show that low feed rates coupled with low cutting speeds yielded the best results.
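
The damage metric and the screening over the feed-rate/cutting-speed matrix can be sketched as follows. The nominal diameter and all measurements below are invented for illustration, and a full ANOVA (as done in Minitab) would go further and attribute variance to each factor:

```python
# Hedged sketch of the damage metric and screening step described above:
# damage = |drilled - nominal| / nominal, as a percentage, tabulated with
# cutting speeds as rows and feed rates as columns. All diameters are
# invented for illustration.

NOMINAL_MM = 6.35  # hypothetical nominal hole diameter

def damage_pct(drilled_mm, nominal_mm=NOMINAL_MM):
    """Drilling-induced damage as a percentage of the nominal diameter."""
    return abs(drilled_mm - nominal_mm) / nominal_mm * 100.0

# (cutting speed m/min, feed rate mm/rev) -> measured hole diameter (mm)
measured = {(20, 0.05): 6.36, (20, 0.10): 6.39, (20, 0.20): 6.45,
            (40, 0.05): 6.38, (40, 0.10): 6.42, (40, 0.20): 6.50,
            (60, 0.05): 6.41, (60, 0.10): 6.47, (60, 0.20): 6.55}

# Pick the combination with the least damage:
best = min(measured, key=lambda k: damage_pct(measured[k]))
```

With the invented data above, the least damage occurs at the lowest speed and feed combination, mirroring the study's experimental finding.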

Keywords: drilling of composites, dimensional accuracy of holes drilled in composites, delamination and charring, graphite-epoxy composites

Procedia PDF Downloads 376
585 Adaptability of Steel-Framed Industrialized Building System In Post-Service Life

Authors: Alireza Taghdiri, Sara Ghanbarzade Ghomi

Abstract:

Existing buildings are subject to constant change, being continuously renovated and repaired over their long service lives. Old buildings are demolished, and their materials and components are recycled or reused for constructing new ones. In this process, the importance of sustainability principles for building construction is well recognized, and great significance must be attached to the consumption of resources, the resulting effects on the environment, and economic costs. Strategies that extend a building's service life and delay demolition have a positive effect on environmental protection. In addition, making building structures easier to alter or expand, and reducing the consumption of energy and natural resources, benefits users, producers, and the environment. To address these problems, the structural components of several conventional building systems were analyzed by applying open building theory, and a new geometry-adaptive building system was developed that can be reconfigured to support different imposed loads. To achieve this goal, various research methods and tools were applied, including a review of the professional and scientific literature, comparative analysis, case study, and computer simulation, and data interpretation was implemented using descriptive statistics and logical arguments. The hypothesis and proposed strategies were thereby evaluated, and an adaptable, reusable two-dimensional building system was presented that responds appropriately to dwellers' and end-users' needs and allows the reuse of the system's structural components in new construction or new functions. Investigations showed that this incremental building system can successfully achieve the architectural design objectives and that, with small modifications to components and joints, different adaptable, load-optimized component alternatives for flexible spaces are easy to obtain.

Keywords: adaptability, durability, open building, service life, structural building system

Procedia PDF Downloads 416
584 Comparison of the Efficacy of Ketamine-Propofol versus Thiopental Sodium-Fentanyl in Procedural Sedation in the Emergency Department: A Randomized Double-Blind Clinical Trial

Authors: Maryam Bahreini, Mostafa Talebi Garekani, Fatemeh Rasooli, Atefeh Abdollahi

Abstract:

Introduction: Procedural sedation and analgesia are desirable for handling painful procedures, and the search for an agent with greater efficacy and fewer complications continues; thus, many sedative regimens have been studied. This study assessed the effectiveness and adverse effects of thiopental sodium-fentanyl compared with the established combination ketamine-propofol for procedural sedation in the emergency department. Methods: Consenting patients were enrolled in this randomized double-blind trial to receive either ketamine-propofol (KP) or thiopental-fentanyl (TF) in a 1:1 mg:mg proportion on a weight-based dosing basis, to reach American Society of Anesthesiologists sedation level III/IV. Respiratory and hemodynamic complications, nausea and vomiting, recovery agitation, patient recall and satisfaction, provider satisfaction, and recovery time were compared. The study was registered in the Iranian Randomized Controlled Trial Registry (code: IRCT2015111325025N1). Results: 96 adult patients were included and randomized, 47 in the KP group and 49 in the TF group. Transient hypoxia occurred in 2.1% of the KP group and 8.1% of the TF group, leading to airway maneuvers in 4.2% versus 8.1% of the two groups, respectively; however, no statistically significant difference was observed between the two combinations, and no endotracheal tube placement or subsequent admission was reported. Patient and physician satisfaction were significantly higher in the KP group. There was no difference between the groups in respiratory, gastrointestinal, cardiovascular, or psychiatric adverse events, recovery time, or patient recall of the procedure. Efficacy and complications were not related to the type of procedure or to patients' smoking or addiction history. Conclusion: The ketamine-propofol and thiopental-fentanyl combinations were comparably effective, although KP resulted in higher patient and provider satisfaction.
The thiopental-fentanyl combination appears to be as potent and efficacious as ketofol, with a broadly similar incidence of adverse events in procedural sedation.
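
The 1:1 mg:mg weight-based mixing described above reduces to a trivial calculation. A hedged sketch; the 0.75 mg/kg figure is purely illustrative and is not the trial's dosing protocol:

```python
# Hedged sketch of the 1:1 mg:mg weight-based mixing described above:
# for a given per-kilogram dose, ketamine and propofol contribute equal
# milligrams. The 0.75 mg/kg figure is purely illustrative and is NOT
# the trial's dosing protocol.

def ketofol_doses_mg(weight_kg, mg_per_kg_each=0.75):
    """Equal ketamine and propofol doses for a given patient weight."""
    dose = weight_kg * mg_per_kg_each
    return {"ketamine_mg": dose, "propofol_mg": dose}

doses = ketofol_doses_mg(80.0)   # 60 mg of each for an 80 kg patient
```

The same equal-milligram structure applies to the TF arm, with thiopental and fentanyl substituted for the two components.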

Keywords: adverse effects, conscious sedation, fentanyl, propofol, ketamine, safety, thiopental

Procedia PDF Downloads 200
583 Values in Higher Education: A Case Study of Higher Education Students

Authors: Bahadır Erişti

Abstract:

Values are behavioral norms grounded in society's communication and interaction processes, reflecting social and cultural backgrounds. The policy of learning and teaching in higher education is oriented towards constructing knowledge and skills, based on a theoretical framework of cognitive and psychomotor aspects. This approach does little to develop generosity, empathy, affection, solidarity, justice, equality, and so on, yet the affective gains of an education system underpin the integrity of social interaction. This situation makes values education in higher education a necessity. The current study considers values education from the viewpoint of students in higher education. Within its framework, an open-ended, scenario-based survey of higher education students was conducted, addressing the students' social, cognitive, affective, and moral development. In line with this purpose, the following aspects of the higher education system were addressed from the students' viewpoint: the students' views on the values that the higher education system tries to instill; the students' suggestions regarding values education in the higher education system; and the students' views on the values that the higher education system imposes. A descriptive qualitative research method was used. The study group is composed of 20 postgraduate students at the Curriculum and Instruction Department of Educational Sciences at Anadolu University. An open-ended survey was applied for the purpose of collecting qualitative data. As a result of the study, the value preferences, value judgments, and value systems of the higher education students were found to be prioritized according to social, cultural, and economic backgrounds and statuses.
The multi-dimensional process of values education in higher education needs to be built on cooperation among higher education, the community, and cultural backgrounds. Judging by the survey, the act of forming judgments about values thus appears to be inherent in the education system itself. The present study highlights the students' value priorities and the importance of values in higher education. If the higher education system takes values as part of its purpose, it can enable society to promote humanity.

Keywords: higher education, value, values education, values in higher education

Procedia PDF Downloads 315
582 Long-Term Outcome of Emergency Response Team System in In-Hospital Cardiac Arrest

Authors: Jirapat Suriyachaisawat, Ekkit Surakarn

Abstract:

Introduction: To improve early detection of, and reduce mortality from, in-hospital cardiac arrest, an Emergency Response Team (ERT) system was planned and implemented in June 2009 to detect pre-arrest conditions and respond to any concerns. The ERT consisted of on-duty physicians and nurses from the emergency department. ERT calling criteria were: acute change of heart rate to < 40 or > 130 beats per minute, systolic blood pressure < 90 mmHg, respiratory rate < 8 or > 28 breaths per minute, O2 saturation < 90%, acute change in conscious state, acute chest pain, or worry about the patient. Data from the early phase of ERT implementation in our hospital (June 2009-2011) showed no statistically significant difference in the incidence of in-hospital cardiac arrest or in the overall hospital mortality rate. Since the introduction of the ERT service, we have conducted a continuous educational campaign to improve awareness and increase use of the service. Methods: To investigate the long-term effect of the ERT system on in-hospital cardiac arrest and the overall hospital mortality rate, we conducted a prospective, controlled before-and-after examination of the incidence of cardiac arrest. Chi-square analysis was performed to test statistical significance. Results: Of a total of 623 ERT cases from June 2009 until December 2012, there were 72 calls in 2009, 196 in 2010, 139 in 2011 and 245 in 2012. The number of ERT calls per 1000 admissions was 7.69 in 2009-10, 5.61 in 2011 and 9.38 in 2012. The number of code blue calls per 1000 admissions decreased significantly, from 2.28 to 0.99 (P < 0.001). The incidence of cardiac arrest decreased progressively from 1.19 to 0.34 per 1000 admissions, reaching statistical significance in 2012 (P < 0.001). The overall hospital mortality rate decreased by 8%, from 15.43 to 14.43 per 1000 admissions (P = 0.095).
Conclusions: ERT system implementation was associated with a progressive reduction in cardiac arrests over a three-year period, reaching statistical significance in the fourth year after implementation. We also found an inverse association between the number of ERT activations and the risk of cardiac arrest, but no difference in the overall hospital mortality rate.
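The chi-square comparison of event rates used in the study can be sketched as below; the admission and event counts are hypothetical placeholders chosen to reproduce rates of roughly 2.28 and 0.99 per 1000 admissions, not the study's actual data.

```python
import math

def chi2_two_proportions(events1, n1, events2, n2):
    """Pearson chi-square test (1 degree of freedom) for two event rates.

    Returns the chi-square statistic and p-value; for 1 dof,
    P(X > x) = erfc(sqrt(x / 2)).
    """
    p_pool = (events1 + events2) / (n1 + n2)
    chi2 = 0.0
    for events, n in ((events1, n1), (events2, n2)):
        for observed, expected in ((events, n * p_pool),
                                   (n - events, n * (1 - p_pool))):
            chi2 += (observed - expected) ** 2 / expected
    p_value = math.erfc(math.sqrt(chi2 / 2))
    return chi2, p_value

# Hypothetical counts: ~2.28 vs ~0.99 code blue calls per 1000 admissions.
chi2, p = chi2_two_proportions(80, 35000, 35, 35000)
```

With these placeholder counts the drop in rate is highly significant (p well below 0.001), mirroring the direction of the reported result.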

Keywords: cardiac arrest, outcome, in-hospital, ERT

Procedia PDF Downloads 186
581 Numerical Investigation into Capture Efficiency of Fibrous Filters

Authors: Jayotpaul Chaudhuri, Lutz Goedeke, Torsten Hallenga, Peter Ehrhard

Abstract:

Purification of gases from aerosols or airborne particles via filters is widely applied in industry and in our daily lives. This separation, especially in the micron and submicron size range, is a necessary step to protect the environment and human health. Fibrous filters are often employed due to their low cost and high efficiency. For designing any filter, the two most important performance parameters are capture efficiency and pressure drop. Since capture efficiency is directly linked to pressure drop, which drives operating costs, a detailed investigation of the separation mechanism is required to optimize the filter design, i.e., to achieve a high capture efficiency with a lower pressure drop. Therefore, a two-dimensional flow simulation around a single fiber using Ansys CFX and Matlab is used to gain insight into the separation process. Instead of simulating a solid fiber, the present Ansys CFX model uses a fictitious domain approach for the fiber by implementing a momentum loss model. This approach has been chosen to avoid creating a new mesh for different fiber sizes, thereby saving time and effort for re-meshing. In a first step, only the flow of the continuous fluid around the fiber is simulated in Ansys CFX; the flow field data is then extracted and imported into Matlab, and the particle trajectory is calculated in a Matlab routine. This calculation is a Lagrangian, one-way coupled approach with all relevant forces acting on the particles. The key parameters for the simulation in both Ansys CFX and Matlab are the porosity ε, the diameter ratio of particle and fiber D, the fluid Reynolds number Re, the particle Reynolds number Rep, the Stokes number St, the Froude number Fr and the density ratio of fluid and particle ρf/ρp. The simulation results were then compared to the single fiber theory from the literature.
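A one-way coupled Lagrangian step like the Matlab routine described above can be sketched in a few lines. The flow field, particle relaxation time, and time step below are illustrative assumptions, and only Stokes drag and gravity are retained from the full set of forces.

```python
def advance_particle(xp, vp, u_fluid, tau_p, g, dt):
    """One explicit-Euler step of a one-way coupled particle.

    dv/dt = (u_fluid - v) / tau_p + g, where tau_p is the Stokes
    relaxation time rho_p * d_p**2 / (18 * mu).
    """
    ax = (u_fluid[0] - vp[0]) / tau_p + g[0]
    ay = (u_fluid[1] - vp[1]) / tau_p + g[1]
    vp = (vp[0] + ax * dt, vp[1] + ay * dt)
    xp = (xp[0] + vp[0] * dt, xp[1] + vp[1] * dt)
    return xp, vp

# Illustrative values: particle released at rest in a uniform flow.
tau_p = 1e-4          # s, assumed Stokes relaxation time
u = (0.1, 0.0)        # m/s, local fluid velocity (placeholder)
g = (0.0, -9.81)      # m/s^2
xp, vp = (0.0, 0.0), (0.0, 0.0)
for _ in range(100):  # march 10 relaxation times
    xp, vp = advance_particle(xp, vp, u, tau_p, g, dt=1e-5)
```

After ten relaxation times the particle velocity has essentially relaxed to the local fluid velocity, plus a small settling component from gravity; a production routine would interpolate `u_fluid` from the exported CFX field at each step.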

Keywords: BBO-equation, capture efficiency, CFX, Matlab, fibrous filter, particle trajectory

Procedia PDF Downloads 188
580 Quality Assessment of New Zealand Mānuka Honeys Using Hyperspectral Imaging Combined with Deep 1D-Convolutional Neural Networks

Authors: Hien Thi Dieu Truong, Mahmoud Al-Sarayreh, Pullanagari Reddy, Marlon M. Reis, Richard Archer

Abstract:

New Zealand mānuka honey is a honeybee product derived mainly from Leptospermum scoparium nectar. The potent antibacterial activity of mānuka honey derives principally from methylglyoxal (MGO), in addition to the hydrogen peroxide and other lesser activities present in all honey. MGO is formed from dihydroxyacetone (DHA), unique to L. scoparium nectar. Mānuka honey also has an idiosyncratic phenolic profile that is useful as a chemical marker. Authentic mānuka honey is highly valuable, but almost all honey is formed from natural mixtures of nectars harvested by a hive over a time period. Once diluted by other nectars, mānuka honey irrevocably loses value. We aimed to apply hyperspectral imaging to honey frames before bulk extraction to minimise the dilution of genuine mānuka by other honey and ensure authenticity at the source. This technology is non-destructive and suitable for an industrial setting. Chemometrics using linear Partial Least Squares (PLS) and Support Vector Machine (SVM) regression showed limited efficacy in interpreting chemical footprints due to large non-linear relationships between predictor and predictand in a large sample set, likely due to honey quality variability across geographic regions. Therefore, an advanced modelling approach, one-dimensional convolutional neural networks (1D-CNN), was investigated for analysing hyperspectral data and extracting biochemical information from honey. The 1D-CNN model showed superior prediction of honey quality (R² = 0.73, RMSE = 2.346, RPD = 2.56) to PLS (R² = 0.66, RMSE = 2.607, RPD = 1.91) and SVM (R² = 0.67, RMSE = 2.559, RPD = 1.98). Classification of mono-floral mānuka honey from multi-floral and non-mānuka honey exceeded 90% accuracy for all models tried. Overall, this study reveals the potential of HSI and deep learning modelling for automating the evaluation of honey quality in frames.
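The regression metrics used to compare the models above (R², RMSE, and RPD, the ratio of the reference-set standard deviation to RMSE) can be reproduced with a few lines of numpy; the arrays below are placeholders for illustration, not the honey data.

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """Return R^2, RMSE and RPD = SD(y_true) / RMSE."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    residuals = y_true - y_pred
    rmse = np.sqrt(np.mean(residuals ** 2))
    ss_res = np.sum(residuals ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    rpd = y_true.std(ddof=1) / rmse   # sample SD of the reference values
    return r2, rmse, rpd

# Placeholder reference/prediction pairs for illustration only.
y_ref = [10.0, 12.0, 15.0, 20.0, 25.0, 30.0]
y_hat = [11.0, 11.5, 16.0, 19.0, 26.0, 29.0]
r2, rmse, rpd = regression_metrics(y_ref, y_hat)
```

An RPD above roughly 2 is commonly read as a model usable for screening, which is the threshold the 1D-CNN clears in the abstract while PLS and SVM do not.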

Keywords: mānuka honey, quality, purity, potency, deep learning, 1D-CNN, chemometrics

Procedia PDF Downloads 117
579 LTE Modelling of a DC Arc Ignition on Cold Electrodes

Authors: O. Ojeda Mena, Y. Cressault, P. Teulet, J. P. Gonnet, D. F. N. Santos, MD. Cunha, M. S. Benilov

Abstract:

The assumption of plasma in local thermal equilibrium (LTE) is commonly used to perform electric arc simulations for industrial applications. This assumption allows the arc to be modelled using a set of magneto-hydrodynamic equations that can be solved with a computational fluid dynamics code. However, the LTE description is only valid in the arc column, whereas in the regions close to the electrodes the plasma deviates from the LTE state. These near-electrode regions are important since they define the energy and current transfer between the arc and the electrodes. Therefore, any accurate modelling of the arc must include a good description of the arc-electrode phenomena. Due to the modelling complexity and computational cost of solving the near-electrode layers, a simplified description of the arc-electrode interaction was developed in a previous work to study a steady high-pressure arc discharge, where the near-electrode regions are introduced at the interface between arc and electrode as boundary conditions. The present work proposes a similar approach to simulate the arc ignition in a free-burning arc configuration following an LTE description of the plasma. To obtain the transient evolution of the arc characteristics, appropriate boundary conditions for both the near-cathode and the near-anode regions are used, based on recent publications. The arc-cathode interaction is modelled using a non-linear surface heating approach considering secondary electron emission. On the other hand, the interaction between the arc and the anode is taken into account by means of the heating voltage approach. From the numerical modelling, three main stages can be identified during the arc ignition. Initially, a glow discharge is observed, where the cold non-thermionic cathode is uniformly heated at its surface and the near-cathode voltage drop is of the order of a few hundred volts.
Next, a spot with high temperature is formed at the cathode tip followed by a sudden decrease of the near-cathode voltage drop, marking the glow-to-arc discharge transition. During this stage, the LTE plasma also presents an important increase of the temperature in the region adjacent to the hot spot. Finally, the near-cathode voltage drop stabilizes at a few volts and both the electrode and plasma temperatures reach the steady solution. The results after some seconds are similar to those presented for thermionic cathodes.

Keywords: arc-electrode interaction, thermal plasmas, electric arc simulation, cold electrodes

Procedia PDF Downloads 104
578 Semi-Automatic Segmentation of Mitochondria on Transmission Electron Microscopy Images Using Live-Wire and Surface Dragging Methods

Authors: Mahdieh Farzin Asanjan, Erkan Unal Mumcuoglu

Abstract:

Mitochondria are cytoplasmic organelles of the cell which play a significant role in a variety of cellular metabolic functions. They act as the power plants of the cell and are surrounded by two membranes. Significant morphological alterations are often due to changes in mitochondrial functions. Electron microscope tomography is a powerful technique for studying the three-dimensional (3D) structure of mitochondria and its alterations in disease states. Detection of mitochondria in electron microscopy images is a challenging problem due to the presence of various subcellular structures and imaging artifacts, and each image typically contains more than one mitochondrion. Hand segmentation of mitochondria is tedious and time-consuming and also requires specialist knowledge about mitochondria. Fully automatic segmentation methods, on the other hand, lead to over-segmentation, and mitochondria are not segmented properly. Therefore, semi-automatic segmentation methods with minimal manual effort are required to edit the results of fully automatic segmentation methods. Here, two editing tools were implemented by applying spline-based surface dragging and interactive live-wire segmentation. These editing tools were applied separately to the results of fully automatic segmentation. A 3D extension of these tools was also studied and tested. Dice coefficients in 2D and 3D for surface dragging using splines were 0.93 and 0.92; for the live-wire method they were 0.94 and 0.91, respectively. The root mean square symmetric surface distance values in 2D and 3D for surface dragging were measured as 0.69 and 0.93; the same metrics for the live-wire tool were 0.60 and 2.11. Comparison with the results of the automatic segmentation method shows that these editing tools led to better results, more similar to the ground truth images, although the required time was higher than the hand-segmentation time.
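The Dice coefficient reported above can be computed directly from a pair of binary masks; this is a generic sketch on a toy grid, not the authors' evaluation code.

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice similarity 2|A ∩ B| / (|A| + |B|) for binary masks."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    intersection = np.logical_and(a, b).sum()
    total = a.sum() + b.sum()
    return 2.0 * intersection / total if total else 1.0

# Two overlapping 6x6 square "mitochondria" on a 10x10 toy grid.
seg = np.zeros((10, 10), dtype=bool); seg[2:8, 2:8] = True
ref = np.zeros((10, 10), dtype=bool); ref[3:9, 3:9] = True
d = dice_coefficient(seg, ref)
```

For the two offset squares the overlap is 25 of 36 pixels each, giving a Dice value of 50/72; a value of 1.0 indicates a perfect match with the ground truth.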

Keywords: medical image segmentation, semi-automatic methods, transmission electron microscopy, surface dragging using splines, live-wire

Procedia PDF Downloads 149
577 3-D Modeling of Particle Size Reduction from Micro to Nano Scale Using Finite Difference Method

Authors: Himanshu Singh, Rishi Kant, Shantanu Bhattacharya

Abstract:

This paper adopts a top-down approach to mathematical modeling to predict the size reduction from micro to nano-scale through persistent etching. The process is simulated using a finite difference approach. Previously, various researchers have simulated the etching process for 1-D and 2-D substrates. It consists of two processes: 1) convection-diffusion in the etchant domain; 2) chemical reaction at the surface of the particle. Since the process requires analysis along a moving boundary, the partial differential equations involved cannot be solved using conventional methods. In 1-D, this problem is very similar to Stefan's problem of the moving ice-water boundary. A fixed grid method using the finite volume method is very popular for modelling etching on one- and two-dimensional substrates. Other popular approaches include the moving grid method and the level set method. In this work, the finite difference method was used to discretize the spherical diffusion equation. Due to the symmetrical distribution of etchant, the angular terms in the equation can be neglected. Concentration is assumed to be constant at the outer boundary. At the particle boundary, the concentration of the etchant is assumed to be zero, since the rate of reaction is much faster than the rate of diffusion. The rate of reaction is proportional to the velocity of the moving boundary of the particle. Modelling of the above reaction was carried out using Matlab. The initial particle size was taken to be 50 microns. The density, molecular weight and diffusion coefficient of the substrate were taken as 2.1 g/cm³, 60 g/mol and 10⁻⁵ cm²/s, respectively. The etch rate was found to decline initially and gradually become constant at 0.02 µm/s (1.2 µm/min). The concentration profile was plotted against radial position at different time intervals. Initially, a sudden drop is observed at the particle boundary due to the high etch rate. This change becomes more gradual with time due to the decline of the etch rate.
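A deliberately simplified quasi-steady sketch of the moving-boundary update is shown below: with the surface concentration pinned at zero, the steady diffusive flux to a sphere of radius R is D·C∞/R, so the boundary recedes at dR/dt = −(M/ρ)·D·C∞/R. The property values mirror the abstract; the bulk etchant concentration is an assumed placeholder, and the sketch omits the early transient in which the reported etch rate declines (it is not the authors' full convection-diffusion model).

```python
# Quasi-steady shrinking-sphere etch model (illustrative sketch).
D = 1e-5        # cm^2/s, diffusion coefficient (from the abstract)
M = 60.0        # g/mol, molecular weight (from the abstract)
rho = 2.1       # g/cm^3, density (from the abstract)
C_inf = 1e-4    # mol/cm^3, bulk etchant concentration (assumed)
R = 50e-4 / 2   # cm, initial radius (50-micron diameter particle)

dt = 1e-3       # s, time step
t = 0.0
while R > 1e-4:                      # stop near the 1-micron scale
    flux = D * C_inf / R             # mol/(cm^2 s), quasi-steady flux
    R -= (M / rho) * flux * dt       # explicit moving-boundary update
    t += dt
```

Because dR/dt ∝ 1/R, integrating gives R² shrinking linearly in time, the classical shrinking-sphere result; with these placeholder values the particle reaches the micron scale after roughly 110 s.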

Keywords: particle size reduction, micromixer, FDM modelling, wet etching

Procedia PDF Downloads 415
576 Climate Change Adaptation Interventions in Agriculture and Sustainable Development through South-South Cooperation in Sub-Saharan Africa

Authors: Nuhu Mohammed Gali, Kenichi Matsui

Abstract:

Climate change poses a significant threat to agriculture and food security in Africa. The UNFCCC recognized the need to address climate change adaptation in the broader context of sustainable development. African countries have initiated a governance system for adapting and responding to climate change in their Nationally Determined Contributions (NDCs). Despite implementation limitations, Africa's adaptation initiatives highlight the need to strengthen and expand adaptation responses. This paper looks at the extent to which South-South Cooperation facilitates the implementation of adaptation actions for agriculture and sustainable development between nations. We conducted a literature review and content analysis of reports prepared by international organizations, reflecting the diversity of adaptation activities taking place in Sub-Saharan Africa. Our analysis of the connection between adaptation and NDCs showed that climate actions are mainstreamed into sustainable development. The NDCs of many countries aim, through climate change adaptation actions for agriculture, to strengthen the resilience of the poor. We found that climate-smart agriculture is at the core of many countries' targets to end hunger. We showed that South-South Cooperation, in terms of capacity, technology, and financial support, can help countries achieve their climate action priorities and the Sustainable Development Goals (SDGs). We found that inadequate policy and regulatory frameworks between countries, differences in development priorities and strategies, poor communication, inadequate coordination, and the lack of local engagement and advocacy are key barriers to South-South Cooperation in Africa. We recommend a multi-dimensional partnership, the provision of financial resources, and a systematic approach to coordination and engagement to promote and achieve the potential of SSC in Africa.

Keywords: climate change, adaptation, food security, sustainable development goals

Procedia PDF Downloads 104
575 Comparative Assessment of Geocell and Geogrid Reinforcement for Flexible Pavement: Numerical Parametric Study

Authors: Anjana R. Menon, Anjana Bhasi

Abstract:

The development of highways and railways plays a crucial role in a nation's economic growth. While rigid concrete pavements are durable with high load-bearing characteristics, growing economies mostly rely on flexible pavements, which are easier to construct and more economical. The strength of a flexible pavement is based on the strength of the subgrade and the load distribution characteristics of the intermediate granular layers. In this scenario, to simultaneously meet economy and strength criteria, it is imperative to strengthen and stabilize the load-transferring layers, namely the subbase and base. Geosynthetic reinforcement in planar and cellular forms has proven effective in improving soil stiffness and providing a stable load transfer platform. Studies have demonstrated the relative superiority of the cellular form (geocells) over planar geosynthetic forms like geogrids, owing to the additional confinement of the infill material and the pocket effect arising from vertical deformation. Hence, the present study investigates the efficiency of geocells over single- and multiple-layer geogrid reinforcement through a series of three-dimensional model analyses of a flexible pavement section under a standard repetitive wheel load. The stress transfer mechanism and deformation profiles under various reinforcement configurations are also studied. Geocell reinforcement is observed to take up a higher proportion of the stress caused by traffic loads compared to single- and double-layer geogrid reinforcement. The efficiency of single geogrid reinforcement reduces with an increase in embedment depth, and the contribution of the lower geogrid is insignificant in the double-geogrid-reinforced system.

Keywords: geocell, geogrid, flexible pavement, repetitive wheel load, numerical analysis

Procedia PDF Downloads 62
574 Validation of the Formula for Air Attenuation Coefficient for Acoustic Scale Models

Authors: Katarzyna Baruch, Agata Szelag, Aleksandra Majchrzak, Tadeusz Kamisinski

Abstract:

The methodology for measuring the sound absorption coefficient in scaled models is based on the ISO 354 standard. The measurement is realised indirectly: the coefficient is calculated from the reverberation time of the empty chamber as well as of the chamber with an inserted sample. It is crucial to maintain stable atmospheric conditions during both measurements. Possible differences may be amended based on the formulas for the atmospheric attenuation coefficient α given in ISO 9613-1. Model studies require scaling particular factors in compliance with specified characteristic numbers; for absorption coefficient measurement these are, for example, the frequency range and the value of the attenuation coefficient m. Thanks to the capabilities of modern electroacoustic transducers, it is no longer a problem to scale the frequencies, which have to be proportionally higher. However, it may be problematic to reduce the values of the attenuation coefficient, which in practice is achieved by drying the air down to a defined relative humidity. Despite the change of frequency range and relative air humidity, the ISO 9613-1 standard still allows the calculation of an amendment for small differences in the atmospheric conditions in the chamber during measurements. The paper discusses a number of theoretical analyses and experimental measurements performed in order to obtain consistency between the values of the attenuation coefficient calculated from the formulas given in the standard and those obtained by measurement. The authors performed measurements of reverberation time in a chamber made at 1/8 scale, in the corresponding frequency range of 800 Hz - 40 kHz, and at different values of relative air humidity (40% ± 5%). Based on the measurements, empirical values of the attenuation coefficient were calculated and compared with theoretical ones. In general, the values correspond with each other, but for high frequencies and low values of relative air humidity the differences are significant.
Those discrepancies may directly influence the measured values of the sound absorption coefficient and cause errors. Therefore, the authors made an effort to determine an amendment minimizing the described inaccuracy.
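The amendment the authors discuss enters through the ISO 354 expression for the equivalent sound absorption area, A_T = 55.3·V·(1/(c₂T₂) − 1/(c₁T₁)) − 4·V·(m₂ − m₁), whose last term corrects for a change in the air attenuation coefficient m between the two measurements. A sketch with illustrative chamber values (volume, times, and m values are assumptions, not the paper's data):

```python
def equivalent_absorption_area(V, c1, T1, m1, c2, T2, m2):
    """Equivalent sound absorption area of a specimen per ISO 354.

    V : room volume, m^3
    c : speed of sound, m/s (1: empty room, 2: with specimen)
    T : reverberation time, s
    m : air attenuation coefficient, 1/m (from ISO 9613-1)
    """
    return (55.3 * V * (1.0 / (c2 * T2) - 1.0 / (c1 * T1))
            - 4.0 * V * (m2 - m1))

# Illustrative 1/8-scale chamber: volume scales with the cube of 1/8.
V = 196.0 / 8 ** 3   # m^3, from an assumed 196 m^3 full-scale room
A_T = equivalent_absorption_area(V, c1=343.0, T1=1.2, m1=0.012,
                                 c2=343.0, T2=0.8, m2=0.010)
```

Leaving out the 4V(m₂ − m₁) term when humidity drifts between the two runs shifts A_T, and hence the absorption coefficient, which is exactly the error the amendment is meant to remove.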

Keywords: air absorption correction, attenuation coefficient, dimensional analysis, model study, scaled modelling

Procedia PDF Downloads 404
573 Electrospray Plume Characterisation of a Single Source Cone-Jet for Micro-Electronic Cooling

Authors: M. J. Gibbons, A. J. Robinson

Abstract:

Increasing expectations on small-form-factor electronics to be more compact while increasing performance have driven conventional cooling technologies to a thermal management threshold. An emerging solution to this problem is electrospray (ES) cooling. ES cooling enables two-phase cooling by utilising Coulomb forces for energy-efficient fluid atomization. The generated charged droplets are accelerated to the grounded target surface by the applied electric field and gravity. While in transit, the mutual repulsion of the like-charged droplets disperses the plume and inhibits droplet coalescence. If the electric field is increased in the cone-jet regime, a corresponding increase in the plume spray angle has been observed. Droplet segregation in the spray plume has also been observed, with primary droplets in the plume core and satellite droplets positioned on the periphery of the plume. This segregation is facilitated by inertial and electrostatic effects, a result corroborated by numerous authors. The satellite droplets are usually more densely charged and move at a lower velocity relative to the spray core due to the radial decay of the electric field. Previous experimental research by Gomez and Tang has shown that the number of droplets deposited on the periphery can be up to twice that of the spray core. This result has been substantiated by numerical models derived by Wilhelm et al., Oh et al. and Yang et al. Yang et al. showed from their numerical model that, by varying the extractor potential, the dispersion radius of the plume varies proportionally. This research aims to investigate this dispersion density and the role it plays in the local heat transfer coefficient profile (h) of ES cooling. This will be carried out for different extractor-target separation heights (H2), working fluid flow rates (Q), and applied extractor potentials (V2).
The plume dispersion will be recorded by spraying onto a 25 µm thick, Joule-heated steel foil and recording the thermal footprint of the ES plume with a Flir A-40 thermal imaging camera. The recorded results will then be analysed by in-house developed MATLAB code.
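The heated-foil technique described above recovers the local heat transfer coefficient in the usual way, h = q″/(T_w − T_∞), with the generated flux q″ = V·I/A from Joule heating; the numbers below are illustrative placeholders, not measurements from the rig, and a full data reduction would also subtract radiation and conduction losses.

```python
def local_h(voltage, current, foil_area, T_wall, T_ambient):
    """Local heat transfer coefficient from a Joule-heated foil.

    Assumes uniform heat generation over the foil and neglects
    losses (radiation, lateral conduction), which a complete
    data reduction would account for.
    """
    q_flux = voltage * current / foil_area    # W/m^2
    return q_flux / (T_wall - T_ambient)      # W/(m^2 K)

# Illustrative values for a thin steel foil patch.
h = local_h(voltage=2.0, current=15.0, foil_area=0.0025,
            T_wall=330.0, T_ambient=295.0)
```

Applying this pixel by pixel to the thermal camera image yields the h-profile whose dependence on plume dispersion the study sets out to map.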

Keywords: electronic cooling, electrospray, electrospray plume dispersion, spray cooling

Procedia PDF Downloads 376
572 Building an Arithmetic Model to Assess Visual Consistency in Townscape

Authors: Dheyaa Hussein, Peter Armstrong

Abstract:

The phenomenon of visual disorder is prominent in contemporary townscapes. This paper provides a theoretical framework for the assessment of visual consistency in townscape in order to achieve more favourable outcomes for users. In this paper, visual consistency refers to the amount of similarity between adjacent components of a townscape. The paper investigates parameters which relate to visual consistency in townscape, explores the relationships between them and highlights their significance. It uses arithmetic methods from outside the domain of urban design to enable the establishment of an objective approach to assessment which considers subjective indicators, including users' preferences. These methods involve the standard deviation, colour distance and the distance between points. The paper identifies urban space as a key representative of the visual parameters of townscape and focuses on its two components, geometry and colour, in the evaluation of the visual consistency of townscape. Accordingly, this article proposes four measurements. The first quantifies the number of vertices, which are points in three-dimensional space connected by lines to represent the appearance of elements. The second evaluates the visual surroundings of urban space by assessing the location of their vertices. The last two measurements calculate the visual similarity of both vertices and colour in the townscape from the variation in each, using methods including the standard deviation and colour difference. The proposed quantitative assessment is based on users' preferences towards these measurements. The paper offers a theoretical basis for a practical tool, currently under development, which can alter the current understanding of architectural form and its application in urban space.
The proposed method underpins expert subjective assessment and permits the establishment of a unified framework which supports creativity through the achievement of a higher level of consistency and satisfaction among the citizens of evolving townscapes.
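The variation measures described above can be sketched as a Euclidean distance between colour triplets and the standard deviation of a geometric measurement across adjacent components; the sample facade values are hypothetical, chosen only to illustrate the idea that a lower spread indicates higher visual consistency.

```python
import math

def colour_distance(c1, c2):
    """Euclidean distance between two RGB triplets (0-255 per channel)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(c1, c2)))

def consistency_spread(values):
    """Population standard deviation of a townscape measurement
    (e.g. vertex counts of adjacent facades); lower spread means
    more visual consistency."""
    mean = sum(values) / len(values)
    return math.sqrt(sum((v - mean) ** 2 for v in values) / len(values))

# Hypothetical adjacent facades: similar colours, varied geometry.
d = colour_distance((200, 180, 150), (195, 182, 148))
spread = consistency_spread([120, 126, 118, 300])  # vertex counts
```

Here the small colour distance signals consistent colouring, while the single 300-vertex outlier drives a large geometric spread, the kind of imbalance the proposed assessment is meant to surface.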

Keywords: townscape, urban design, visual assessment, visual consistency

Procedia PDF Downloads 296
571 Sensitivity and Uncertainty Analysis of One Dimensional Shape Memory Alloy Constitutive Models

Authors: A. B. M. Rezaul Islam, Ernur Karadogan

Abstract:

Shape memory alloys (SMAs) are known for their shape memory effect and pseudoelastic behavior. Their thermomechanical behavior has been modeled by numerous researchers from microscopic thermodynamic and macroscopic phenomenological points of view. The Tanaka, Liang-Rogers and Ivshin-Pence models are some of the most popular SMA macroscopic phenomenological constitutive models. They describe SMA behavior in terms of stress, strain and temperature. These models involve material parameters that carry associated uncertainty. At different operating temperatures, this uncertainty propagates to the output when the material is subjected to loading followed by unloading. The propagation of uncertainty when utilizing these models in real-life applications can result in performance discrepancies or failure at extreme conditions. To resolve this, we used a probabilistic approach to perform the sensitivity and uncertainty analysis of the Tanaka, Liang-Rogers, and Ivshin-Pence models. The Sobol and extended Fourier Amplitude Sensitivity Testing (eFAST) methods have been used to perform the sensitivity analysis for simulated isothermal loading/unloading at various operating temperatures. The results make evident that the model sensitivities vary with operating temperature and loading condition. The average and stress-dependent sensitivity indices identify the most significant parameters at several temperatures. This work presents the sensitivity and uncertainty analysis results and compares them at different temperatures and loading conditions for all three models. The analysis presented will aid in designing engineering applications by reducing the probability of model failure due to uncertainty in the input parameters. Thus, a proper understanding of the sensitive parameters and of the uncertainty propagation at several operating temperatures and loading conditions is recommended for the Tanaka, Liang-Rogers, and Ivshin-Pence models.
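A first-order Sobol index S_i = Var(E[Y|X_i])/Var(Y) can be estimated crudely by binning Monte Carlo samples on X_i. The sketch below runs on a toy linear model (analytic S₁ = 9/10), not on an SMA constitutive model, for which dedicated Sobol/eFAST samplers would be used in practice.

```python
import random

def first_order_sobol(samples_x, samples_y, n_bins=20):
    """Binning estimator of S_i = Var(E[Y | X_i]) / Var(Y)."""
    n = len(samples_y)
    mean_y = sum(samples_y) / n
    var_y = sum((y - mean_y) ** 2 for y in samples_y) / n
    lo, hi = min(samples_x), max(samples_x)
    bins = [[] for _ in range(n_bins)]
    for x, y in zip(samples_x, samples_y):
        i = min(int((x - lo) / (hi - lo) * n_bins), n_bins - 1)
        bins[i].append(y)
    # Variance of the conditional means, weighted by bin occupancy.
    cond = [(sum(b) / len(b), len(b)) for b in bins if b]
    var_cond = sum(w * (m - mean_y) ** 2 for m, w in cond) / n
    return var_cond / var_y

# Toy model y = 3*x1 + x2 with x1, x2 ~ U(0, 1): analytic S1 = 0.9.
random.seed(0)
x1 = [random.random() for _ in range(20000)]
x2 = [random.random() for _ in range(20000)]
y = [3 * a + b for a, b in zip(x1, x2)]
s1 = first_order_sobol(x1, y)
```

The estimate lands close to the analytic 0.9, with a small downward bias from the finite bin width; the study's Sobol and eFAST methods are more sample-efficient versions of the same variance decomposition.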

Keywords: constitutive models, FAST sensitivity analysis, sensitivity analysis, Sobol, shape memory alloy, uncertainty analysis

Procedia PDF Downloads 124
570 DNA Polymorphism Studies of β-Lactoglobulin Gene in Native Saudi Goat Breeds

Authors: Amr A. El Hanafy, Muhammad I. Qureshi, Jamal Sabir, Mohamed Mutawakil, Mohamed M. Ahmed, Hassan El Ashmaoui, Hassan Ramadan, Mohamed Abou-Alsoud, Mahmoud Abdel Sadek

Abstract:

β-Lactoglobulin (β-LG) is the dominant non-casein whey protein found in the milk of cattle and most other ruminants. The amino acid sequence of β-LG, along with its three-dimensional structure, establishes its membership of the lipocalin superfamily. Preliminary studies in goats indicated that milk yield can be influenced by polymorphism in the genes coding for whey proteins. The aim of this study is to identify and evaluate the incidence of functional polymorphisms in the exonic and intronic portions of the β-LG gene in native Saudi goat breeds (Ardi, Habsi, and Harri). Blood samples were collected from 300 animals (100 for each breed) and genomic DNA was extracted using the QIAamp DNA extraction kit. A fragment of the β-LG gene from exon 7 to the 3' flanking region was amplified with pairs of specific primers. Subsequent digestion with the SacII restriction endonuclease revealed two alleles (A and B) and three different banding patterns or genotypes, i.e., AA, AB and BB. Statistical analysis showed that the β-LG AA genotype was associated with a higher milk yield than the β-LG AB and β-LG BB genotypes. Nucleotide sequencing of the selected β-LG fragments was performed and the sequences were submitted to GenBank (NCBI accession nos. KJ544248, KJ588275, KJ588276, KJ783455, KJ783456 and KJ874959). Two previously established SNPs in exon 7 (+4601 and +4603) and one novel SNP in the 3' UTR region were detected in the β-LG fragments with the AA genotype. The polymorphisms in exon 7 did not produce any amino acid change. Phylogenetic analysis based on the nucleotide sequences of the native Saudi goats indicated evolutionary similarity with the GenBank reference sequences of goat, Bubalus bubalis and Bos taurus.
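Computationally, an RFLP analysis like the one above amounts to locating the enzyme's recognition site in the amplified fragment; SacII recognises CCGCGG. A minimal search over a made-up sequence (the amplicon below is hypothetical, not the β-LG fragment itself):

```python
def restriction_sites(sequence, site="CCGCGG"):
    """Return 0-based start positions of a restriction site.

    SacII recognises CCGCGG; a fragment carrying the site yields
    two bands on digestion, while one lacking it stays uncut,
    which is the basis of the A/B allele distinction.
    """
    sequence = sequence.upper()
    positions = []
    start = sequence.find(site)
    while start != -1:
        positions.append(start)
        start = sequence.find(site, start + 1)
    return positions

# Hypothetical amplicon carrying one SacII site.
amplicon = "ATGGTACCGCGGTTCAAGGA"
cuts = restriction_sites(amplicon)
fragments = len(cuts) + 1   # bands expected after digestion
```

One internal site yields two fragments on the gel; a genotype caller would compare the predicted band pattern for each allele against the observed AA, AB, and BB patterns.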

Keywords: β-Lactoglobulin, Saudi goats, PCR-RFLP, functional polymorphism, nucleotide sequencing, phylogenetic analysis

Procedia PDF Downloads 477
569 Design Approach of the Turbocompressor for Aerospace Industry

Authors: Halil Baris Cit, Mert Durmaz

Abstract:

Subsequent to the design of a compact centrifugal compressor specifically intended for use in aviation platforms, the design process has been evaluated within the context of this study. A trade-off study matrix for future work has been formed by comparing the design with previous studies in the literature. The power consumption of the designed compressor will be approximately 25 kW, and the working fluid will be a refrigerant. Properties such as thermodynamic behaviour and the Global Warming Potential (GWP) and Ozone Depletion Potential (ODP) values of the fluid have been taken into consideration during the selection of the refrigerant. Concepts NREC and ANSYS Vista CCD software have been used in the conceptual design phase, and R1233zd has been selected as the refrigerant. Real-gas Computational Fluid Dynamics (CFD) analyses have been carried out with different cubic equations of state in the ANSYS CFX solver in order to determine the most suitable solution method. These equations are the Redlich-Kwong, Soave-Redlich-Kwong, Aungier-Redlich-Kwong, and Peng-Robinson equations of state. Using these solution equations on the same compressor configuration, analyses have also been carried out with two gases having different characteristics. As a result of the 12 analyses carried out with three different refrigerants (R11, R134a, and R1233zd) and the four solution equations mentioned above, the most accurate solution method has been selected by comparing the densities of the gases at different pressure and temperature points. Following completion of the design with the selected equation, the results have been analyzed under two headings. The first is a trade-off study matrix presenting a comparison regarding the compact centrifugal compressor operating with the refrigerant. This comparison is between some dimensionless and dimensional parameters determined before the design and their values in the literature. The second shows, for each real-gas analysis method, the difference between the actual density and the density in the design software, along with its effects on the design.
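The density comparison at the heart of the solver selection can be sketched in a few lines. The snippet below evaluates the Peng-Robinson equation of state for R134a vapor and compares the predicted density with the ideal-gas value; the critical properties are approximate published values for R134a, and the code is an illustrative sketch of the cubic-EOS approach, not the ANSYS CFX solver configuration used in the study.

```python
import numpy as np

# Approximate published critical properties for R134a:
# Tc ~ 374.21 K, Pc ~ 4.059 MPa, acentric factor ~ 0.3268,
# molar mass ~ 0.10203 kg/mol.
R = 8.314  # universal gas constant, J/(mol.K)
TC, PC, OMEGA, M = 374.21, 4.059e6, 0.3268, 0.10203

def pr_density(T, P):
    """Vapor-phase density (kg/m^3) from the Peng-Robinson EOS."""
    a = 0.45724 * (R * TC) ** 2 / PC
    b = 0.07780 * R * TC / PC
    kappa = 0.37464 + 1.54226 * OMEGA - 0.26992 * OMEGA ** 2
    alpha = (1.0 + kappa * (1.0 - (T / TC) ** 0.5)) ** 2
    A = a * alpha * P / (R * T) ** 2
    B = b * P / (R * T)
    # Cubic in the compressibility factor Z:
    # Z^3 - (1 - B) Z^2 + (A - 3B^2 - 2B) Z - (AB - B^2 - B^3) = 0
    coeffs = [1.0,
              -(1.0 - B),
              A - 3.0 * B ** 2 - 2.0 * B,
              -(A * B - B ** 2 - B ** 3)]
    roots = np.roots(coeffs)
    # Largest real root corresponds to the vapor phase.
    z = max(r.real for r in roots if abs(r.imag) < 1e-10)
    return P * M / (z * R * T)

# Compare real-gas and ideal-gas densities at 300 K and 0.2 MPa.
T, P = 300.0, 2.0e5
rho_pr = pr_density(T, P)
rho_ideal = P * M / (R * T)
print(f"Peng-Robinson: {rho_pr:.2f} kg/m^3, ideal gas: {rho_ideal:.2f} kg/m^3")
```

Repeating this evaluation for each equation of state and each refrigerant at several pressure and temperature points, and comparing against reference density data, is the essence of the selection procedure the abstract describes.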

Keywords: turbocompressor, refrigerant, aviation, aerospace compressor

Procedia PDF Downloads 74
568 Numerical Studies on Bypass Thrust Augmentation Using Convective Heat Transfer in Turbofan Engine

Authors: R. Adwaith, J. Gopinath, Vasantha Kohila B., R. Chandru, Arul Prakash R.

Abstract:

The turbofan engine is a type of air-breathing engine, widely used in aircraft propulsion, that produces thrust mainly from the mass flow of air bypassing the engine core. The present research has numerically developed an effective method of increasing the thrust generated from the bypass air. This thrust increase is brought about by heating the walls of the bypass valve using convective heat transfer from the combustion chamber. It is achieved computationally by using external heat to enhance the velocity of the bypass air of turbofan engines. The bypass valves are either heated externally using a multi-cell tube resistor, which converts electricity generated by dynamos into heat, or heat is transferred from the combustion chamber. This raises the temperature of the flow in the valves and thereby increases the velocity of the flow that enters the nozzle of the engine. As a result, the mass flow of air passing through the core engine to produce thrust can be significantly reduced, thereby saving a considerable amount of jet fuel. Numerical analysis has been carried out on a scaled-down version of a typical turbofan bypass valve, in which the valve wall temperature was raised to 700 Kelvin. It is observed from the analysis that the exit velocity contributing to thrust increased significantly, by 10%, due to the heating of the bypass valve. The optimum degree of temperature increase, and the corresponding increase in jet velocity, has been calculated to determine the operating temperature range for an efficient increase in velocity. The technique used in this research increases thrust by using heated bypass air without extracting much additional work from the fuel, and thus improves the efficiency of existing turbofan engines. Dimensional analysis has been carried out to verify the accuracy of the results obtained numerically.
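The link between heating the bypass flow and the reported velocity gain can be illustrated with a one-dimensional isentropic nozzle estimate. All numbers below (pressure ratio, gas properties, and the two stagnation temperatures) are assumptions chosen for illustration; the study itself relies on full CFD of the bypass valve.

```python
import math

CP = 1005.0   # specific heat of air at constant pressure, J/(kg.K)
GAMMA = 1.4   # ratio of specific heats for air

def exit_velocity(t0, pressure_ratio):
    """Isentropic nozzle exit velocity (m/s) for stagnation
    temperature t0 (K) and exit-to-stagnation pressure ratio pe/p0."""
    return math.sqrt(2.0 * CP * t0 *
                     (1.0 - pressure_ratio ** ((GAMMA - 1.0) / GAMMA)))

pr = 0.6                              # assumed pressure ratio pe/p0
v_cold = exit_velocity(300.0, pr)     # unheated bypass flow
v_hot = exit_velocity(363.0, pr)      # flow heated by the valve walls

# For a fixed pressure ratio, velocity scales with sqrt(T0), so a
# 21% rise in stagnation temperature yields a 10% velocity gain,
# consistent in order of magnitude with the increase reported above.
print(f"{v_cold:.1f} m/s -> {v_hot:.1f} m/s "
      f"(+{100.0 * (v_hot / v_cold - 1.0):.1f}%)")
```

The square-root scaling also makes clear why the operating temperature range matters: each further increment of wall heating buys progressively less additional jet velocity.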

Keywords: turbofan engine, bypass valve, multi-cell tube, convective heat transfer, thrust

Procedia PDF Downloads 341
567 Defence Ethics: A Performance Measurement Framework for the Defence Ethics Program

Authors: Allyson Dale, Max Hlywa

Abstract:

The Canadian public expects the highest moral standards from Canadian Armed Forces (CAF) members and Department of National Defence (DND) employees. The Chief, Professional Conduct and Culture (CPCC) stood up in April 2021 with the mission of ensuring that the defence culture and members’ conduct are aligned with the ethical principles and values that the organization aspires towards. The Defence Ethics Program (DEP), which stood up in 1997, is a values-based ethics program for individuals and organizations within the DND/CAF and now falls under CPCC. The DEP is divided into five key functional areas: policy, communications, collaboration, training and education, and advice and guidance. The main focus of the DEP is to foster an ethical culture within defence so that members and organizations perform to the highest ethical standards. The measurement of organizational ethics is often complex and challenging. In order to monitor whether the DEP is achieving its intended outcomes, a performance measurement framework (PMF) was developed using the Director General Military Personnel Research and Analysis (DGMPRA) PMF development process. This evidence-based process draws on subject-matter expertise from the defence team. The goal of this presentation is to describe each stage of the DGMPRA PMF development process and to present and discuss the products of the DEP PMF (e.g., the logic model). Specifically, first, a strategic framework was developed to provide a high-level overview of the strategic objectives, mission, and vision of the DEP. Next, Key Performance Questions were created based on the objectives in the strategic framework. A logic model detailing the activities, outputs (what is produced by the program activities), and intended outcomes of the program was then developed to demonstrate how the program works.
Finally, Key Performance Indicators were developed based on both the intended outcomes in the logic model and the Key Performance Questions in order to monitor program effectiveness. The Key Performance Indicators measure aspects of organizational ethics such as ethical conduct and decision-making, DEP collaborations, and knowledge and awareness of the Defence Ethics Code while leveraging ethics-related items from multiple DGMPRA surveys where appropriate.

Keywords: defence ethics, ethical culture, organizational performance, performance measurement framework

Procedia PDF Downloads 83