Search results for: single strap joint
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5580


720 Coulomb-Explosion Driven Proton Focusing in an Arched CH Target

Authors: W. Q. Wang, Y. Yin, D. B. Zou, T. P. Yu, J. M. Ouyang, F. Q. Shao

Abstract:

High-energy-density states, i.e., matter and radiation at energy densities in excess of 10^11 J/m^3, are relevant to materials science, nuclear physics, astrophysics, and geophysics. Laser-driven particle beams are well suited to heating matter as a trigger because of their unique properties of ultrashort duration and low emittance. Compared to X-ray and electron sources, proton and ion beams generate uniformly heated large-volume material more easily because of their highly localized energy deposition. With the construction of state-of-the-art high-power laser facilities, creating extreme conditions of high temperature and high density in the laboratory has become possible. It has been demonstrated that, on a picosecond time scale, solid-density material can be isochorically heated to over 20 eV by ultrafast proton beams generated from spherically shaped targets. In this technique, the proton energy density plays a crucial role in the formation of warm dense matter states. Recently, several methods have been devoted to realizing the focusing of the accelerated protons, involving externally applied static fields or specially designed targets interacting with single or multiple laser pulses. In previous works, two co-propagating or counter-propagating laser pulses were employed to strike a submicron plasma shell. However, ultra-high pulse intensities, accurate temporal synchronization, and undesirable transverse instabilities over long times remain intractable for current experimental implementations. Here, a mechanism for the focusing of laser-driven proton beams from two-ion-species arched targets is investigated by multi-dimensional particle-in-cell simulations. When an intense linearly polarized laser pulse impinges on the thin arched target, all electrons are completely evacuated, leading to a Coulomb-explosive electric field originating mostly from the heavier carbon ions.
In the reference frame moving at the ionic sound speed, the lighter protons are accelerated and effectively focused by this radially isotropic field. At a laser intensity of 2.42×10^21 W/cm^2, a ballistic proton bunch with an energy density as high as 2.15×10^17 J/m^3 is produced, and the highest proton energy and the focusing position agree well with theory.
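As a quick consistency sketch, the reported peak proton energy density can be compared with the high-energy-density threshold quoted at the start of the abstract (both values are taken from the abstract; the comparison itself is only illustrative):

```python
# Quick unit check (illustrative): the reported peak proton energy density
# versus the conventional high-energy-density (HED) threshold.
HED_THRESHOLD = 1e11   # J/m^3, threshold quoted in the abstract
reported = 2.15e17     # J/m^3, peak proton energy density from the simulations

ratio = reported / HED_THRESHOLD
print(f"Reported density exceeds the HED threshold by a factor of {ratio:.2e}")
```

The bunch thus sits about six orders of magnitude above the threshold that defines the high-energy-density regime.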

Keywords: Coulomb explosion, focusing, high-energy-density, ion acceleration

Procedia PDF Downloads 344
719 An Integrated Power Generation System Design Developed between Solar Energy-Assisted Dual Absorption Cycles

Authors: Asli Tiktas, Huseyin Gunerhan, Arif Hepbasli

Abstract:

Solar energy, with its abundance and cleanness, is one of the prominent renewable energy sources in multigeneration energy systems, where various outputs, especially power generation, are produced together. In the literature, concentrated solar energy systems, an expensive technology, are mostly used in solar power plants where medium-to-high-capacity production outputs are achieved. In addition, although different solar energy-supported integrated power generation systems have been developed and proposed by various investigators, absorption technology, one of the key points of the present study, has been used in these studies mainly for cooling. Unlike these common uses, this study designs a system in which a flat plate solar collector (FPSC), a Rankine cycle, an absorption heat transformer (AHT), and an absorption cooling system (ACS) are integrated. The proposed system aims to produce medium-to-high-capacity electricity, heating, and cooling outputs using a technique different from the literature, at lower production costs than existing systems. With the proposed integrated system design, the average production costs for electricity, heating, and cooling are targeted at 5-10% of the literature averages of 0.685 USD/kWh, 0.247 USD/kWh, and 0.342 USD/kWh reported for similar-scale systems. In the proposed design, this is achieved by first raising the outlet temperature of the AHT and FPSC system, expanding the high-temperature steam leaving the absorber of the AHT system in the turbine down to the condenser temperature of the ACS system, then integrating it directly into the evaporator of that system and completing the AHT cycle. Through this proposed system, heating and cooling are carried out by completing the AHT and ACS cycles, respectively, while power is generated by the expansion in the turbine.
Because only a single generator is used to produce these three outputs together, the costs of additional boilers and of a separate heat source are also saved. To demonstrate that the system proposed in this study offers a more optimal solution, the techno-economic parameters obtained from energy, exergy, economic, and environmental analyses were compared with those of similar-scale systems in the literature. The design parameters of the proposed system were determined through a parametric optimization study so as to exceed the maximum efficiency and effectiveness, and to reduce the production cost rates, of the compared systems.
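The stated cost target can be sketched as simple arithmetic; the literature-average figures are those quoted in the abstract, and the 5-10% band is applied to them directly (illustrative only):

```python
# Illustrative arithmetic only: the target cost range implied by the claim
# that the proposed system reaches 5-10% of the literature-average
# production costs for electricity, heating, and cooling.
literature_avg = {"electricity": 0.685, "heating": 0.247, "cooling": 0.342}  # USD/kWh

for output, cost in literature_avg.items():
    lo, hi = 0.05 * cost, 0.10 * cost
    print(f"{output:11s}: {lo:.4f}-{hi:.4f} USD/kWh (vs. {cost} USD/kWh average)")
```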

Keywords: solar energy, absorption technology, Rankine cycle, multigeneration energy system

Procedia PDF Downloads 55
718 Study of the Hydrodynamic of Electrochemical Ion Pumping for Lithium Recovery

Authors: Maria Sofia Palagonia, Doriano Brogioli, Fabio La Mantia

Abstract:

In the last decade, lithium has become an important raw material in various sectors, in particular for rechargeable batteries. Its production is expected to grow more and more in the future, especially for mobile energy storage and electromobility. Until now, it has mostly been produced by the evaporation of water from salt lakes, which leads to huge water consumption, a large amount of waste, and a strong environmental impact. A new, clean, and faster electrochemical technique to recover lithium has recently been proposed: electrochemical ion pumping. It consists of capturing lithium ions from a feed solution by intercalation in a lithium-selective material, followed by releasing them into a recovery solution; both steps are driven by the passage of a current. In this work, a new configuration of the electrochemical cell is presented and used to study and optimize the intercalation of lithium ions through the hydrodynamic conditions. Lithium manganese oxide (LiMn₂O₄) was used as the cathode to intercalate lithium ions selectively during reduction, while nickel hexacyanoferrate (NiHCF), used as the anode, releases positive ions. The effect of hydrodynamics on the process was studied by conducting experiments at various fluxes of the electrolyte through the electrodes, in terms of the charge circulated through the cell, the lithium captured per unit mass of material, and the overvoltage. The results show that flowing the electrolyte inside the cell improves lithium capture, in particular at low lithium concentration. Indeed, in an Atacama feed solution at 40 mM of lithium, the amount of lithium captured does not increase considerably with the flux of the electrolyte. Instead, when the concentration of lithium ions is 5 mM, the amount of lithium captured in a single capture cycle increases with increasing flux, leading to the conclusion that the slowest step in the process is the transport of lithium ions in the liquid phase.
Furthermore, an influence of the concentration of other cations in solution on the process performance was observed. In particular, lithium capture was performed with different concentrations of NaCl together with 5 mM of LiCl, and the results show that the presence of NaCl limits the amount of lithium captured. Further studies can be performed to understand why the full capacity of the material is not reached at the highest flow rate. This is probably due to the porous structure of the material, since the liquid phase inside the pores is likely not affected by the convective flow. This work proves that electrochemical ion pumping, with a suitable hydrodynamic design, enables the recovery of lithium from feed solutions at lower concentrations than the sources currently exploited, down to 1 mM.
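The relation between the charge circulated through the cell and the lithium captured per unit mass follows Faraday's law (one electron per intercalated Li+ ion); a minimal sketch, with hypothetical charge and efficiency values not reported in the abstract:

```python
# Sketch (hypothetical numbers): upper bound on lithium captured from the
# charge circulated through the cell, via Faraday's law. One electron is
# transferred per intercalated Li+ ion.
F = 96485.0          # C/mol, Faraday constant
M_LI = 6.94          # g/mol, molar mass of lithium

def lithium_captured_mg(charge_C, coulombic_efficiency=1.0):
    """Lithium mass (mg) captured for a given circulated charge."""
    moles_li = coulombic_efficiency * charge_C / F
    return moles_li * M_LI * 1000.0

# e.g. 50 C circulated at 90% coulombic efficiency
print(f"{lithium_captured_mg(50.0, 0.9):.3f} mg Li")
```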

Keywords: desalination battery, electrochemical ion pumping, hydrodynamic, lithium

Procedia PDF Downloads 206
717 Experimental Study Analyzing the Similarity Theory Formulations for the Effect of Aerodynamic Roughness Length on Turbulence Length Scales in the Atmospheric Surface Layer

Authors: Matthew J. Emes, Azadeh Jafari, Maziar Arjomandi

Abstract:

Velocity fluctuations of shear-generated turbulence are largest in the atmospheric surface layer (ASL), of nominal 100 m depth, and can lead to dynamic effects such as galloping and flutter on small physical structures on the ground when the turbulence length scales and the characteristic length of the structure are of the same order of magnitude. Turbulence length scales are a measure of the average sizes of the energy-containing eddies; they are widely estimated using two-point cross-correlation analysis, converting the temporal lag to a separation distance via Taylor's hypothesis that the convection velocity is equal to the mean velocity at the corresponding height. Profiles of turbulence length scales in the neutrally stratified ASL, as predicted by Monin-Obukhov similarity theory in Engineering Sciences Data Unit (ESDU) 85020 for single-point data and ESDU 86010 for two-point correlations, depend largely on the aerodynamic roughness length. Field measurements have shown that longitudinal turbulence length scales show significant regional variation, whereas length scales of the vertical component show consistent Obukhov scaling from site to site because of the absence of low-frequency components. Hence, the objective of this experimental study is to compare the similarity theory relationships between turbulence length scales and aerodynamic roughness length with those calculated from the autocorrelations and cross-correlations of field-measured velocity data at two sites: the Surface Layer Turbulence and Environmental Science Test (SLTEST) facility in a desert ASL in Dugway, Utah, USA, and the Commonwealth Scientific and Industrial Research Organisation (CSIRO) wind tower in a rural ASL in Jemalong, NSW, Australia. The results indicate that the longitudinal turbulence length scales increase with increasing aerodynamic roughness length, contrary to the relationships derived from the similarity theory correlations in the ESDU models.
However, the ratio of the turbulence length scales in the lateral and vertical directions to the longitudinal length scales is relatively independent of surface roughness, showing consistent inner-scaling between the two sites and the ESDU correlations. Further, the diurnal variation of wind velocity due to changes in atmospheric stability conditions has a significant effect on the turbulence structure of the energy-containing eddies in the lower ASL.
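The length-scale estimation procedure described above (autocorrelation of the velocity record plus Taylor's frozen-turbulence hypothesis) can be sketched on a synthetic signal; the record parameters below are invented for illustration:

```python
import numpy as np

def integral_length_scale(u, dt):
    """Integral length scale: integrate the velocity autocorrelation up to
    its first zero crossing to get the integral time scale, then apply
    Taylor's hypothesis (L = U_mean * T)."""
    fluct = u - u.mean()
    acf = np.correlate(fluct, fluct, mode="full")[len(u) - 1:]
    acf = acf / acf[0]                  # normalise so rho(0) = 1
    # Integrate the autocorrelation up to its first zero crossing
    zero = np.argmax(acf <= 0) if np.any(acf <= 0) else len(acf)
    T = acf[:zero].sum() * dt           # integral time scale, s
    return u.mean() * T                 # length scale, m

# Synthetic demo record: 8 m/s mean wind plus a 30 s eddy and noise
rng = np.random.default_rng(0)
t = np.arange(0, 600, 0.1)              # 10 Hz sampling, 10 min record
u = 8.0 + np.sin(2 * np.pi * t / 30) + 0.3 * rng.standard_normal(t.size)
print(f"Estimated longitudinal length scale: {integral_length_scale(u, 0.1):.1f} m")
```

In field practice the same estimate is formed from much longer records, and the two-point cross-correlations replace the single-point autocorrelation where tower arrays are available.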

Keywords: aerodynamic roughness length, atmospheric surface layer, similarity theory, turbulence length scales

Procedia PDF Downloads 123
716 Integrating Reactive Chlorine Species Generation with H2 Evolution in a Multifunctional Photoelectrochemical System for Low Operational Carbon Emissions Saline Sewage Treatment

Authors: Zexiao Zheng, Irene M. C. Lo

Abstract:

Organic pollutants, ammonia, and bacteria are major contaminants in sewage, which may adversely impact ecosystems without proper treatment. Conventional wastewater treatment plants (WWTPs) remove these contaminants from sewage but suffer from high carbon emissions and are unable to remove emerging organic pollutants (EOPs). Herein, we have developed a low operational carbon emissions multifunctional photoelectrochemical (PEC) system for saline sewage treatment that simultaneously removes organic compounds, ammonia, and bacteria, coupled with H2 evolution. A reduced BiVO4 (r-BiVO4) with improved PEC properties, owing to the construction of oxygen vacancies and V4+ species, was developed for the multifunctional PEC system. The PEC/r-BiVO4 process could treat saline sewage to meet the local WWTPs' discharge standard in 40 minutes at 2.0 V vs. Ag/AgCl and completely degrade carbamazepine (one of the EOPs), coupled with significant H2 evolution. A remarkable reduction in operational carbon emissions was achieved by the PEC/r-BiVO4 process compared with large-scale WWTPs, attributed to the suppressed direct carbon emissions from the generation of greenhouse gases. Mechanistic investigation revealed that the PEC system could activate chloride ions in sewage to generate reactive chlorine species and facilitate •OH production, promoting contaminant removal. The PEC system exhibited operational feasibility at different pH values and total suspended solids concentrations and has outstanding reusability and stability, confirming its promising practical potential. The study combined the simultaneous removal of three major contaminants from saline sewage with H2 evolution in a single PEC process, demonstrating a viable approach to supplementing and extending existing wastewater treatment technologies.
The study generated profound insights into the in-situ activation of chloride ions already present in sewage for contaminant removal and offered fundamental theories for applying the PEC system to sewage remediation with low operational carbon emissions. The developed PEC system can fit well with the future needs of wastewater treatment because of the following features: (i) low operational carbon emissions, benefiting the carbon neutrality process; (ii) higher effluent quality due to the elimination of EOPs; (iii) chemical-free operation of the sewage treatment; and (iv) easy reuse and recycling without secondary pollution.
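For context, degradation performance in PEC studies of this kind is commonly summarised by a pseudo-first-order rate constant, k = ln(C0/C)/t; a minimal sketch with invented concentrations (the abstract reports only the 40-minute treatment time, not kinetic data):

```python
import math

# Hypothetical illustration of how a pseudo-first-order degradation rate
# constant is typically extracted in PEC degradation studies. The
# concentrations below are invented for the example.
def first_order_k(c0, c, t_min):
    """Pseudo-first-order rate constant (min^-1) from a concentration decay."""
    return math.log(c0 / c) / t_min

k = first_order_k(c0=10.0, c=0.5, t_min=40.0)   # e.g. 95% removal in 40 min
print(f"k = {k:.3f} min^-1")
```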

Keywords: contaminants removal, H2 evolution, multifunctional PEC system, operational carbon emissions, saline sewage treatment, r-BiVO4 photoanodes

Procedia PDF Downloads 155
715 A Machine Learning Approach for Efficient Resource Management in Construction Projects

Authors: Soheila Sadeghi

Abstract:

Construction projects are complex and often subject to significant cost overruns due to the multifaceted nature of the activities involved. Accurate cost estimation is crucial for effective budget planning and resource allocation. Traditional methods for predicting overruns often rely on expert judgment or analysis of historical data, which can be time-consuming, subjective, and may fail to consider important factors. However, with the increasing availability of data from construction projects, machine learning techniques can be leveraged to improve the accuracy of overrun predictions. This study applied machine learning algorithms to enhance the prediction of cost overruns in a case study of a construction project. The methodology involved the development and evaluation of two machine learning models: Random Forest and Neural Networks. Random Forest can handle high-dimensional data, capture complex relationships, and provide feature importance estimates. Neural Networks, particularly Deep Neural Networks (DNNs), are capable of automatically learning and modeling complex, non-linear relationships between input features and the target variable. These models can adapt to new data, reduce human bias, and uncover hidden patterns in the dataset. The findings of this study demonstrate that both Random Forest and Neural Networks can significantly improve the accuracy of cost overrun predictions compared to traditional methods. The Random Forest model also identified key cost drivers and risk factors, such as changes in the scope of work and delays in material delivery, which can inform better project risk management. However, the study acknowledges several limitations. First, the findings are based on a single construction project, which may limit the generalizability of the results to other projects or contexts. 
Second, the dataset, although comprehensive, may not capture all relevant factors influencing cost overruns, such as external economic conditions or political factors. Third, the study focuses primarily on cost overruns, while schedule overruns are not explicitly addressed. Future research should explore the application of machine learning techniques to a broader range of projects, incorporate additional data sources, and investigate the prediction of both cost and schedule overruns simultaneously.
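A minimal sketch of the Random Forest approach on synthetic data, with hypothetical feature names standing in for the cost drivers identified in the study (this is not the study's dataset or model configuration):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Illustrative only: a Random Forest regressor predicting cost overrun and
# reporting the feature importances used to identify key cost drivers.
# Data and feature names are synthetic/hypothetical.
rng = np.random.default_rng(42)
n = 500
X = rng.random((n, 3))  # scope_changes, material_delay, crew_size (scaled)
# Overrun driven mainly by scope changes and material delays, plus noise
y = 0.6 * X[:, 0] + 0.3 * X[:, 1] + 0.05 * rng.standard_normal(n)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
for name, imp in zip(["scope_changes", "material_delay", "crew_size"],
                     model.feature_importances_):
    print(f"{name:15s} importance = {imp:.2f}")
```

On data built this way, the importance ranking recovers the dominant driver, which mirrors how the study reads off scope changes and material delays as key risk factors.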

Keywords: resource allocation, machine learning, optimization, data-driven decision-making, project management

Procedia PDF Downloads 36
714 Efficient Estimation of Maximum Theoretical Productivity from Batch Cultures via Dynamic Optimization of Flux Balance Models

Authors: Peter C. St. John, Michael F. Crowley, Yannick J. Bomble

Abstract:

Production of chemicals from engineered organisms in a batch culture typically involves a trade-off between productivity, yield, and titer. However, strategies for strain design typically involve designing mutations to achieve the highest yield possible while maintaining growth viability. Such approaches tend to follow the principle of designing static networks with minimum metabolic functionality to achieve desired yields. While these methods are computationally tractable, optimum productivity is likely achieved by a dynamic strategy, in which intracellular fluxes change their distribution over time. One can use multi-stage fermentations to increase either productivity or yield. Such strategies range from simple manipulations (an aerobic growth phase followed by an anaerobic production phase) to more complex genetic toggle switches. Additionally, computational methods can be developed to aid in optimizing two-stage fermentation systems. One can assume an initial control strategy (i.e., a single reaction target) in maximizing productivity, but it is unclear how close this productivity would come to a global optimum. The calculation of maximum theoretical yield in metabolic engineering can help guide strain and pathway selection for static strain design efforts. Here, we present a method for calculating the maximum theoretical productivity of a batch culture system. The method follows the traditional assumptions of dynamic flux balance analysis: internal metabolite fluxes are governed by a pseudo-steady state, while external metabolite fluxes are represented by a dynamic system including Michaelis-Menten or Hill-type regulation. The productivity optimization is achieved via dynamic programming and accounts explicitly for an arbitrary number of fermentation stages and flux variable changes. We have applied our method to succinate production in two common microbial hosts: E. coli and A. succinogenes.
The method can be further extended to calculate the complete productivity versus yield Pareto surface. Our results demonstrate that nearly optimal yields and productivities can indeed be achieved with only two discrete flux stages.
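Under the pseudo-steady-state assumption, each stage of such an optimization reduces to a flux balance linear program: maximize a product flux subject to S v = 0 and flux bounds. A toy example, with an invented three-reaction network rather than the genome-scale models used for E. coli and A. succinogenes:

```python
import numpy as np
from scipy.optimize import linprog

# Toy flux balance analysis (illustrative network, not the paper's model):
# maximise product secretion v3 subject to the pseudo-steady-state
# constraint S v = 0 and a substrate uptake bound.
# Metabolites: A, B. Reactions:
#   v1: substrate uptake -> A,  v2: A -> B,  v3: B -> product (secreted)
S = np.array([[1.0, -1.0,  0.0],   # mass balance on A
              [0.0,  1.0, -1.0]])  # mass balance on B

res = linprog(c=[0.0, 0.0, -1.0],          # maximise v3 (linprog minimises)
              A_eq=S, b_eq=[0.0, 0.0],
              bounds=[(0, 10), (0, None), (0, None)])
print(f"Maximum product flux: {res.x[2]:.1f} (limited by the uptake bound)")
```

The dynamic-programming layer of the method then chooses how such stage-wise optima are strung together over the course of the batch.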

Keywords: A. succinogenes, E. coli, metabolic engineering, metabolite fluxes, multi-stage fermentations, succinate

Procedia PDF Downloads 215
713 Epoxomicin Affects Proliferating Neural Progenitor Cells of Rat

Authors: Bahaa Eldin A. Fouda, Khaled N. Yossef, Mohamed Elhosseny, Ahmed Lotfy, Mohamed Salama, Mohamed Sobh

Abstract:

Developmental neurotoxicity (DNT) refers to the toxic effects imparted by various chemicals on the brain during early childhood. Because human brains are vulnerable during this period, various chemicals can exert their maximum effects on the brain in early childhood. Some toxicants, e.g. lead, have been confirmed to induce developmental toxic effects on the CNS; however, most agents cannot be identified with certainty due to the limitations of the predictive toxicology models used. A novel alternative method that can overcome most of the limitations of conventional techniques is the 3D neurosphere system. This in-vitro system can recapitulate most of the changes occurring during brain development, making it an ideal model for predicting neurotoxic effects. In the present study, we assessed the possible DNT of epoxomicin, a naturally occurring selective proteasome inhibitor with anti-inflammatory activity. Rat neural progenitor cells were isolated from rat embryos (E14) extracted from placental tissue. The cortices were aseptically dissected out from the brains of the fetuses, and the tissues were triturated by repeated passage through a fire-polished constricted Pasteur pipette. The dispersed tissues were allowed to settle for 3 min. The supernatant was then transferred to a fresh tube and centrifuged at 1,000 g for 5 min. The pellet was placed in Hank's balanced salt solution and cultured as free-floating neurospheres in proliferation medium. Two doses of epoxomicin (1 µM and 10 µM) were applied to cultured neurospheres for a period of 14 days. For proliferation analysis, spheres were cultured in proliferation medium, and after 0, 4, 5, 11, and 14 days, sphere size was determined by software analysis. The diameter of each neurosphere was measured and exported to an Excel file for statistical analysis.
For viability analysis, trypsin-EDTA solution was added to the neurospheres for 3 min to dissociate them into a single-cell suspension, and viability was then evaluated by the Trypan Blue exclusion test. Epoxomicin was found to affect the proliferation and viability of neurospheres; these effects were positively correlated with dose and the progress of time. This study confirms the DNT effects of epoxomicin in the 3D neurosphere model. The effects on proliferation suggest possible gross morphologic changes, while the decrease in viability suggests possible focal lesions on exposure to epoxomicin during early childhood.
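The proliferation read-out (sphere diameters summarised per treatment group and time point before statistical analysis) can be sketched as follows, with hypothetical diameter values:

```python
from statistics import mean, stdev

# Sketch of the proliferation read-out described above: neurosphere diameters
# (hypothetical values, in micrometers) summarised per group and time point.
diameters = {  # day -> {group: measured sphere diameters}
    4:  {"control": [102, 110, 98, 107], "epoxomicin_1uM": [95, 90, 99, 92]},
    14: {"control": [240, 255, 230, 248], "epoxomicin_1uM": [150, 142, 160, 155]},
}

for day, groups in diameters.items():
    for group, vals in groups.items():
        print(f"day {day:2d} {group:15s}: mean {mean(vals):6.1f}, SD {stdev(vals):.1f}")
```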

Keywords: neural progenitor cells, epoxomicin, neurosphere, medical and health sciences

Procedia PDF Downloads 425
712 Comparison of Phytochemicals in Grapes and Wine from Shenton Park Winery

Authors: Amanda Sheard, Garry Lee, Katherine Stockham

Abstract:

Introduction: The health benefits associated with wine consumption have been well documented; these include anticancer, anti-inflammatory, and cardiovascular protective effects. The majority of these health benefits have been linked to the polyphenols found within wine and grapes; once consumed, polyphenols exhibit free-radical-quenching capabilities. Environmental factors such as rainfall, temperature, CO2 levels, and sunlight exposure have been shown to affect the polyphenol content of grapes. The objective of this work was to evaluate the effect of growing conditions on the antioxidant capacity of grapes obtained from a single-plot vineyard in Perth. This was achieved through the analysis of samples using oxygen radical antioxidant capacity (ORAC), cellular antioxidant activity (CAA) in human red blood cells, ICP-MS and ICP-OES, total polyphenols (PPs), and total flavonoids (FLa). The data obtained were compared with observed climate data. The 14 selected Vitis vinifera L. cultivars included Cabernet Franc, Cabernet Sauvignon, Carnelian, Chardonnay, Grenache, Malbec, Merlot, Orange Muscat, Roussanne, Sauvignon Blanc, Shiraz, Tempranillo, Verdelho, and Viognier. Results: Notable variations between red and white grape cultivars were found, with results ranging from 125-350 mg/100 g for PPs, 93-300 mg/100 g for FLa, 13-33 mM T.E/kg for ORAC, and 0.3-27 mM Q.E/kg for CAA. No correlation was found between the CAA and ORAC values obtained in this study, except that white cultivars were consistently lower than red. ICP analysis showed that seeds contained the highest concentration of copper, followed by the skins and flesh of the grape. A positive correlation between copper and ORAC was found. The ORAC, PPs, and FLa in red grapes were consistently higher than in white grape cultivars; these findings are supported by literature values.
Significance: The cellular antioxidant activities of white and red wine cultivars were used to compare the bioactivity of these grapes against the chemical ORAC measurement. The common measure of antioxidant activity is the chemical value from ORAC analysis; however, this may not reflect activity within the human body. Hence, measurements were also carried out using the cellular antioxidant activity assay for comparison. Additionally, the study explored the influence of weather systems such as El Niño and La Niña on the polyphenol content of Australian wine cultivars grown in Perth.
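The copper-ORAC relationship reported above rests on a Pearson correlation; a minimal sketch with hypothetical values (the study's raw data are not reproduced here):

```python
import math

# Pearson correlation of the kind used to relate copper content to ORAC
# values. The numbers below are hypothetical, for illustration only.
def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

copper = [0.8, 1.1, 1.5, 2.0, 2.6]       # hypothetical copper levels, mg/kg
orac = [14.0, 17.5, 21.0, 26.0, 31.0]    # hypothetical ORAC, mM T.E/kg
print(f"r = {pearson_r(copper, orac):.3f}")
```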

Keywords: oxygen radical antioxidant activity, cellular antioxidant activity, total polyphenols, total flavonoids, wine grapes, climate

Procedia PDF Downloads 287
711 Supply Chain Design: Criteria Considered in Decision Making Process

Authors: Lenka Krsnakova, Petr Jirsak

Abstract:

Prior research on facility location in the supply chain has mostly focused on the improvement of mathematical models, since supply chain design has long been an area of operational research that emphasizes mainly quantitative criteria. Qualitative criteria are still largely neglected within supply chain design research. Due to changing market conditions, facility location in the supply chain has become a multi-criteria decision-making problem rather than a single-criterion decision. Thus, both qualitative and quantitative criteria have to be included in the decision-making process. The aim of this study is to emphasize the importance of qualitative criteria as key parameters of relevant mathematical models. We examine which criteria are taken into consideration when Czech companies decide about their facility location. A literature review of the criteria used in the facility location decision-making process creates the theoretical background for the study. Data collection was conducted through a questionnaire survey. The questionnaire was sent to manufacturing and trading companies of all sizes (small, medium, and large enterprises) with representation in the Czech Republic in the following sectors: automotive, toys, clothing, electronics, and pharmaceuticals. A comparison is made between the criteria that prevail in current research and those considered important by companies in the Czech Republic. Despite the number of articles focused on supply chain design, only a minority of them consider qualitative criteria, and they rarely treat supply chain design as a multi-criteria decision-making problem. Preliminary results of the questionnaire survey indicate that companies in the Czech Republic see qualitative criteria and their impact on facility location decisions as crucial.
Qualitative criteria such as company strategy, quality of the working environment, or future development expectations were confirmed to be considered by Czech companies. This study confirms that qualitative criteria can significantly influence whether a particular location is or is not the right place for a logistics facility. The research has two major limitations. First, researchers who focus on improving mathematical models mostly do not mention the criteria that enter the model. Second, Czech supply chain managers selected important criteria from a group of 18 available criteria and assigned them importance weights; this does not necessarily mean that these criteria were taken into consideration when the last facility location was chosen, only how managers perceive them today. Since the study confirmed the need for future research on how qualitative criteria influence the facility location decision-making process, the authors have already started in-depth interviews with the participating companies to reveal how the inclusion of qualitative criteria in this process influences company performance.
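Combining weighted qualitative and quantitative criteria of the kind surveyed here is often formalised as simple additive weighting; a sketch with hypothetical criteria, weights, and site scores (not the study's 18-criterion set):

```python
# Simple additive weighting (SAW) sketch of multi-criteria facility-location
# scoring with both quantitative and qualitative criteria. Weights and site
# scores are hypothetical; scores are normalised to [0, 1], higher is better.
weights = {"transport_cost": 0.35, "labour_availability": 0.25,
           "company_strategy_fit": 0.25, "working_environment": 0.15}

sites = {
    "Site A": {"transport_cost": 0.9, "labour_availability": 0.6,
               "company_strategy_fit": 0.4, "working_environment": 0.7},
    "Site B": {"transport_cost": 0.6, "labour_availability": 0.8,
               "company_strategy_fit": 0.9, "working_environment": 0.8},
}

scores = {site: sum(weights[c] * v for c, v in crit.items())
          for site, crit in sites.items()}
best = max(scores, key=scores.get)
print({s: round(v, 3) for s, v in scores.items()}, "->", best)
```

Note how a site that wins on the quantitative criterion alone can lose overall once the qualitative criteria carry weight, which is the study's central point.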

Keywords: criteria influencing facility location, Czech Republic, facility location decision-making, qualitative criteria

Procedia PDF Downloads 321
710 Association between Appearance Schemas and Personality

Authors: Berta Rodrigues Maia, Mariana Marques, Frederica Carvalho

Abstract:

Introduction: Personality traits are related to many forms of psychological distress, such as body dissatisfaction. Aim: To explore the associations between appearance schemas and personality traits. Method: 494 Portuguese university students (80.2% female and 99.2% single), with a mean age of 20.17 years (SD = 1.77; range: 18-20), filled in the Appearance Schemas Inventory-Revised, the NEO Personality Inventory (a Portuguese short version), and the Composite Multidimensional Perfectionism Scale. Results: An independent-samples t-test was conducted to compare appearance schema scores by sex, and a significant difference was found in self-evaluation salience scores [females (M = 37.99, SD = 7.82); males (M = 35.36, SD = 6.60); t(489) = -3.052, p = .002]. There was no significant difference in motivational salience scores by sex [females (M = 27.67, SD = 4.84); males (M = 26.70, SD = 4.99); t(489) = -1.748, p = .081]. When correlations were conducted separately by sex, self-evaluation salience was positively correlated with concern over mistakes (r = .27), doubts about actions (r = .35), and socially prescribed perfectionism (r = .23). Moreover, for females, self-evaluation salience was positively correlated with concern over mistakes (r = .34), personal standards (r = .25), doubts about actions (r = .33), parental expectations (r = .24), parental criticism (r = .24), organization (r = .11), socially prescribed perfectionism (r = .31), self-oriented perfectionism (r = .32), and neuroticism (r = .33). Concerning motivational salience, in the total sample (not separately by sex), this dimension correlated significantly with conscientiousness (r = .18), personal standards (r = .23), socially prescribed perfectionism (r = .10), and self-oriented perfectionism (r = .29). All correlations were significant at the 0.01 level (2-tailed), except for socially prescribed perfectionism.
All other correlations (with neuroticism, extroversion, openness, agreeableness, concern over mistakes, doubts about actions, parental expectations, and parental criticism) were not significant. Conclusions: Females seem to value their self-appearance more than males, and, in females, the salience of appearance in life seems to be associated with both maladaptive and adaptive perfectionism. In males, the salience of appearance was only related to adaptive perfectionism. These results suggest that males are more concerned with their own standards regarding appearance, while for females, others' standards are also relevant. In females, the salience of appearance in life also seems to relate to the experience of feelings such as anxiety and depression (neuroticism). The motivation to improve appearance seemed to be particularly related, in both sexes, to adaptive perfectionism (generally concerning personal standards more). Longitudinal studies are needed to clarify the causality of these results. Acknowledgment: This study was carried out under the strategic project of the Centre for Philosophical and Humanistic Studies (CEFH) UID/FIL/00683/2019, funded by the Fundação para a Ciência e a Tecnologia (FCT).
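The reported t(489) = -3.052 can be approximately reproduced from the summary statistics in the abstract; the group sizes of 394 females and 97 males are an assumption inferred from the 80.2% female share and the reported degrees of freedom:

```python
import math

# Reproducing the reported independent-samples t-test from summary
# statistics. Group sizes are not given explicitly in the abstract; n = 394
# females and n = 97 males are inferred (an assumption) from the 80.2%
# female share and the reported df = 489.
def pooled_t(m1, s1, n1, m2, s2, n2):
    """Pooled-variance two-sample t statistic and degrees of freedom."""
    df = n1 + n2 - 2
    sp2 = ((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / df
    se = math.sqrt(sp2 * (1 / n1 + 1 / n2))
    return (m2 - m1) / se, df

t, df = pooled_t(37.99, 7.82, 394, 35.36, 6.60, 97)
print(f"t({df}) = {t:.3f}")  # close to the reported t(489) = -3.052
```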

Keywords: appearance schemas, personality traits, university students, sex

Procedia PDF Downloads 127
709 H2 Permeation Properties of a Catalytic Membrane Reactor in Methane Steam Reforming Reaction

Authors: M. Amanipour, J. Towfighi, E. Ganji Babakhani, M. Heidari

Abstract:

Cylindrical alumina microfiltration membrane (GMITM Corporation; inside diameter = 9 mm, outside diameter = 13 mm, length = 50 mm) with an average pore size of 0.5 micrometer and a porosity of about 0.35 was used as the support for the membrane reactor. This support was soaked in boehmite sols, whose mean particle size was adjusted in the range of 50 to 500 nm by carefully controlling the hydrolysis time, and calcined at 650 °C for two hours. This process was repeated with different boehmite solutions in order to achieve an intermediate layer with an average pore size of about 50 nm. The resulting substrate was then coated with a thin and dense layer of silica by the counter-current chemical vapour deposition (CVD) method. A boehmite sol with 10 wt.% nickel, prepared by a standard procedure, was used to make the catalytic layer. BET, SEM, and XRD analyses were used to characterize this layer. The catalytic membrane reactor was placed in an experimental setup to evaluate its permeation and hydrogen separation performance in a steam reforming reaction. The setup consisted of a tubular module in which the membrane was fixed, with the reforming reaction occurring on the inner side of the membrane. A methane stream, diluted with nitrogen, and deionized water with a steam-to-carbon (S/C) ratio of 3.0 entered the reactor after it had been heated to 500 °C at a rate of 2 °C/min and the catalytic layer had been reduced in the presence of hydrogen for 2.5 hours. Nitrogen flow was used as sweep gas on the outer side of the reactor. Any liquid produced was trapped and separated at the reactor exit by a cold trap, and the produced gases were analyzed by an on-line gas chromatograph (Agilent 7890A) to measure total CH4 conversion and H2 permeation. BET analysis indicated a uniform pore size distribution for the catalyst, with an average pore size of 280 nm and a surface area of 275 m²/g.
Single-component permeation tests were carried out for hydrogen, methane, and carbon dioxide in the temperature range of 500-800 °C, and the results showed almost the same hydrogen permeance and selectivity values as for the composite membrane without the catalytic layer. Performance of the catalytic membrane was evaluated by applying the membrane as a membrane reactor for the methane steam reforming reaction at a gas hourly space velocity (GHSV) of 10,000 h⁻¹ and 2 bar. CH4 conversion increased from 50% to 85% as the reaction temperature rose from 600 °C to 750 °C, which is well above the equilibrium curve at the reaction conditions, but slightly lower than for a membrane reactor with a packed nickel catalytic bed, owing to the bed's higher surface area compared to the catalytic layer.

Keywords: catalytic membrane, hydrogen, methane steam reforming, permeance

Procedia PDF Downloads 255
708 High Resolution Satellite Imagery and Lidar Data for Object-Based Tree Species Classification in Quebec, Canada

Authors: Bilel Chalghaf, Mathieu Varin

Abstract:

Forest characterization in Quebec, Canada, is usually assessed by photo-interpretation at the stand level. For species identification, this often results in a lack of precision. Very high spatial resolution imagery, such as DigitalGlobe, and Light Detection and Ranging (LiDAR) have the potential to overcome the limitations of aerial imagery. To date, few studies have used such data to map a large number of species at the tree level using machine learning techniques. The main objective of this study is to map 11 tall tree species (>17 m) at the tree level using an object-based approach in the broadleaf forest of Kenauk Nature, Quebec. For individual tree crown segmentation, three canopy height models (CHMs) from LiDAR data were assessed: 1) the original, 2) a filtered, and 3) a corrected model. The corrected CHM gave the best accuracy and was then coupled with imagery to refine tree species crown identification. When compared with photo-interpretation, 90% of the objects represented a single species. For modeling, 313 variables were derived from 16-band WorldView-3 imagery and LiDAR data, using radiance, reflectance, pixel, and object-based calculation techniques. Variable selection procedures were employed to reduce their number from 313 to 16, using only 11 bands to aid reproducibility. For classification, a global approach using all 11 species was compared to a semi-hierarchical hybrid classification approach at two levels: (1) tree type (broadleaf/conifer) and (2) individual broadleaf (five) and conifer (six) species. Five model techniques were used: (1) support vector machine (SVM), (2) classification and regression tree (CART), (3) random forest (RF), (4) k-nearest neighbors (k-NN), and (5) linear discriminant analysis (LDA). Each model was tuned separately for all approaches and levels. For the global approach, the best model was the SVM using eight variables (overall accuracy (OA): 80%, Kappa: 0.77).
With the semi-hierarchical hybrid approach, at the tree type level, the best model was the k-NN using six variables (OA: 100% and Kappa: 1.00). At the level of identifying broadleaf and conifer species, the best model was the SVM, with OA of 80% and 97% and Kappa values of 0.74 and 0.97, respectively, using seven variables for both models. This paper demonstrates that a hybrid classification approach gives better results and that using 16-band WorldView-3 with LiDAR data leads to more precise predictions for tree segmentation and classification, especially when the number of tree species is large.
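The five-model comparison described above can be sketched with scikit-learn. The data here are synthetic stand-ins (the WorldView-3/LiDAR variables are not public), so the accuracies are not those of the paper; the point is the cross-validated comparison loop across the five classifier families:

```python
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for the per-crown variables:
# 500 crowns, 16 selected variables, 11 species classes
X, y = make_classification(n_samples=500, n_features=16, n_informative=12,
                           n_classes=11, n_clusters_per_class=1, random_state=0)

models = {
    "SVM": SVC(),
    "CART": DecisionTreeClassifier(random_state=0),
    "RF": RandomForestClassifier(random_state=0),
    "k-NN": KNeighborsClassifier(),
    "LDA": LinearDiscriminantAnalysis(),
}
# Mean 5-fold cross-validated accuracy for each model
scores = {name: cross_val_score(m, X, y, cv=5).mean() for name, m in models.items()}
for name, acc in scores.items():
    print(f"{name}: {acc:.2f}")
```

In practice each model would additionally be tuned (as the authors did) before comparing.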

Keywords: tree species, object-based, classification, multispectral, machine learning, WorldView-3, LiDAR

Procedia PDF Downloads 131
707 Digital Image Correlation: Metrological Characterization in Mechanical Analysis

Authors: D. Signore, M. Ferraiuolo, P. Caramuta, O. Petrella, C. Toscano

Abstract:

The Digital Image Correlation (DIC) is a recently developed optical technique that is spreading across all engineering sectors because it allows the non-destructive estimation of the entire surface deformation without any contact with the component under analysis. These characteristics make DIC very appealing in all cases where the global deformation state must be known without using strain gauges, which are the most widely used measuring devices. DIC is applicable to any material subjected to distortion caused by either thermal or mechanical load, providing high-definition mapping of displacements and deformations. That is why, in the civil and transportation industries, DIC is very useful for studying the behavior of metallic as well as composite materials. DIC is also used in the medical field for the characterization of the local strain field of vascular tissue surfaces subjected to uniaxial tensile loading. DIC can be carried out in two-dimensional mode (2D DIC) if a single camera is used, or in three-dimensional mode (3D DIC) if two cameras are involved. Each point of the test surface framed by the cameras can be associated with a specific pixel of the image, and the coordinates of each point are calculated knowing the relative distance between the two cameras together with their orientation. In both arrangements, when a component is subjected to a load, several images related to different deformation states can be acquired through the cameras. A specific software package analyzes the images via the mutual correlation between the reference image (obtained without any applied load) and those acquired during deformation, giving the relative displacements. In this paper, a metrological characterization of digital image correlation is performed on aluminum and composite targets, in both static and dynamic loading conditions, by comparison between DIC and strain gauge measurements.
In the static test, interesting results were obtained thanks to an excellent agreement between the two measuring techniques. In addition, the deformation detected by DIC is consistent with the result of a FEM simulation. In the dynamic test, DIC was able to follow the periodic deformation of the specimen with good accuracy, giving results coherent with those of the FEM simulation. In both situations, it was seen that the DIC measurement accuracy depends on several parameters, such as the optical focusing, the parameters chosen to perform the mutual correlation between the images and, finally, the reference points on the image to be analyzed. In the future, the influence of these parameters will be studied, and a method to increase the accuracy of the measurements will be developed in accordance with the requirements of industry, especially the aerospace industry.
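The core of the correlation step, matching a subset of the reference image inside the deformed image, can be illustrated with a minimal zero-normalized cross-correlation (ZNCC) search. This is a toy rigid-translation sketch, not the software used in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
reference = rng.random((64, 64))                        # speckle pattern (no load)
true_shift = (3, 5)
deformed = np.roll(reference, true_shift, axis=(0, 1))  # rigid displacement

# Subset (template) taken from the interior of the reference image
sub = reference[20:40, 20:40]

best, best_score = None, -np.inf
for dy in range(-8, 9):
    for dx in range(-8, 9):
        cand = deformed[20 + dy:40 + dy, 20 + dx:40 + dx]
        # Zero-normalized cross-correlation: robust to brightness changes
        a = sub - sub.mean()
        b = cand - cand.mean()
        score = (a * b).sum() / np.sqrt((a ** 2).sum() * (b ** 2).sum())
        if score > best_score:
            best, best_score = (dy, dx), score

print(best)  # recovered displacement of the subset
```

Real DIC software refines this integer-pixel match to sub-pixel accuracy and handles local deformation, not just translation.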

Keywords: accuracy, deformation, image correlation, mechanical analysis

Procedia PDF Downloads 310
706 A Look into Surgical Site Infections: Impact of Collective Interventions

Authors: Lisa Bennett, Cynthia Walters, Cynthia Argani, Andy Satin, Geeta Sood, Kerri Huber, Lisa Grubb, Woodrow Noble, Melissa Eichelberger, Darlene Zinalabedini, Eric Ausby, Jeffrey Snyder, Kevin Kirchoff

Abstract:

Background: Surgical site infections (SSIs) within the obstetric population pose a variety of complications, creating clinical and personal challenges for the new mother and her neonate during the postpartum period. Our journey to achieve compliance with the SSI core measure for cesarean sections revealed many opportunities to improve these outcomes. Objective: Achieve and sustain core measure compliance, keeping surgical site infection rates below the national benchmark pooled mean of 1.8% in post-operative patients who delivered via cesarean section at the Johns Hopkins Bayview Medical Center. Methods: A root cause analysis was performed and revealed several environmental, pharmacologic, and clinical practice opportunities for improvement. A multidisciplinary approach led by the OB Safety Nurse, OB Medical Director, and Infectious Disease Department resulted in the implementation of fourteen interventions over a twenty-month period. Interventions included: post-operative dressing changes, standardizing operating room attire, broadening pre-operative antibiotics, initiating vaginal preps, improving operating room terminal cleaning, testing air quality, and re-educating scrub technicians on technique. Results: Prior to the implementation of our interventions, the quarterly SSI rate in Obstetrics peaked at 6.10%. Although no single intervention resulted in dramatic improvement, after implementation of all fourteen interventions the quarterly SSI rate has subsequently ranged from 0.0% to 2.70%. Significance: Taking an introspective look at current practices can reveal opportunities for improvement that previously were not considered. Collectively, the benefit of these interventions has been a significant decrease in surgical site infection rates. The impact of this quality improvement project highlights the synergy created when members of the multidisciplinary team work in collaboration to improve patient safety and achieve a high quality of care.

Keywords: cesarean section, surgical site infection, collaboration and teamwork, patient safety, quality improvement

Procedia PDF Downloads 481
705 Two-Stage Estimation of Tropical Cyclone Intensity Based on Fusion of Coarse and Fine-Grained Features from Satellite Microwave Data

Authors: Huinan Zhang, Wenjie Jiang

Abstract:

Accurate estimation of tropical cyclone intensity is of great importance for disaster prevention and mitigation. Existing techniques are largely based on satellite imagery data, and the research and utilization of the inner thermal core structure characteristics of tropical cyclones still pose challenges. This paper presents a two-stage tropical cyclone intensity estimation network based on the fusion of coarse and fine-grained features from microwave brightness temperature data. The data used in this network are obtained from the thermal core structure of tropical cyclones through Advanced Technology Microwave Sounder (ATMS) inversion. Firstly, the thermal core information in the pressure direction is comprehensively expressed through the maximum intensity projection (MIP) method, constructing coarse-grained thermal core images that represent the tropical cyclone. These images provide a coarse-grained wind speed estimate in the first stage. Then, based on this result, fine-grained features are extracted by combining thermal core information from multiple view profiles with a distributed network and fused with the coarse-grained features from the first stage to obtain the final two-stage wind speed estimate. Furthermore, to better capture the long-tail distribution characteristics of tropical cyclones, focal loss is used in the coarse-grained loss function of the first stage, and ordinal regression loss is adopted in the second stage to replace traditional single-value regression. The selected tropical cyclones span 2012 to 2021 and are distributed in the North Atlantic (NA) region. The training set covers 2012 to 2017, the validation set 2018 to 2019, and the test set 2020 to 2021.
Based on the Saffir-Simpson Hurricane Wind Scale (SSHS), this paper categorizes tropical cyclones into three major categories: pre-hurricane, minor hurricane, and major hurricane, achieving a classification accuracy of 86.18% and an intensity estimation error of 4.01 m/s for the NA region. The results indicate that thermal core data can effectively represent the level and intensity of tropical cyclones, warranting further exploration of tropical cyclone attributes with this data.
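The focal loss mentioned for the first stage down-weights well-classified examples relative to standard cross-entropy, so training concentrates on the rare, hard cases in the long tail. A minimal sketch (gamma = 2 is a common choice, not necessarily the paper's setting):

```python
import numpy as np

def focal_loss(p_true, gamma=2.0):
    """Focal loss for the probability assigned to the true class.

    The (1 - p)^gamma factor goes to 0 as p -> 1, so easy examples
    contribute little and hard (low-p) examples dominate the loss.
    """
    p_true = np.asarray(p_true, dtype=float)
    return -((1.0 - p_true) ** gamma) * np.log(p_true)

def cross_entropy(p_true):
    return -np.log(np.asarray(p_true, dtype=float))

# An easy example (p = 0.9) is down-weighted far more than a hard one (p = 0.1)
p = np.array([0.9, 0.1])
print(focal_loss(p))
print(cross_entropy(p))
```

For the easy example the focal loss is roughly 1% of the cross-entropy, while for the hard example it remains close to it.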

Keywords: artificial intelligence, deep learning, data mining, remote sensing

Procedia PDF Downloads 61
704 Safety Evaluation of Intramuscular Administration of Zuprevo® Compared to Draxxin® in the Treatment of Swine Respiratory Disease at Weaning Age

Authors: Josine Beek, S. Agten, R. Del Pozo, B. Balis

Abstract:

The objective of the present study was to compare the safety of intramuscular administration of Zuprevo® (tildipirosin, 40 mg/mL) with Draxxin® (tulathromycin, 100 mg/mL) in the treatment of swine respiratory disease at weaning age. The trial was carried out in two farrow-to-finish farms with 300 sows (farm A) and 500 sows (farm B) in a batch-production system. Farm A had no history of respiratory problems, whereas farm B had a history of respiratory outbreaks with increased mortality (>2%) in the nursery. Both farms were positive for Pasteurella multocida, Bordetella bronchiseptica, Actinobacillus pleuropneumoniae and Haemophilus parasuis. From each farm, one batch of piglets was included (farm A: 644 piglets; farm B: 963 piglets). One day before weaning (day 0; 18-21 days of age), piglets were identified by an individual ear tag and randomly assigned to a treatment group. At day 0, group 1 was treated with a single intramuscular injection of Zuprevo® (tildipirosin, 40 mg/mL; 1 mL/10 kg) and group 2 with Draxxin® (tulathromycin, 100 mg/mL; 1 mL/40 kg). For practical reasons, the dosage of the product was adjusted according to three weight categories: <4 kg, 4-6 kg and >6 kg. Within each farm, piglets of both groups were commingled at weaning and subsequently managed in the same facilities under identical environmental conditions. Our study covered the period from day 0 until 10 weeks of age. Safety of treatment was evaluated by 1) visual examination for signs of discomfort directly after treatment and after 15 min, 1 h and 24 h, and 2) mortality rate within 24 h after treatment. Efficacy of treatment was evaluated based on mortality rate from day 0 until 10 weeks of age. Each piglet that died during the study period was necropsied by the herd veterinarian to determine the probable cause of death. Data were analyzed using binary logistic regression, and differences were considered significant if p < 0.05. The pig was the experimental unit.
In total, 848 piglets were treated with tildipirosin and 759 piglets with tulathromycin. In farm A, one piglet with retarded growth (<1 kg at 18 days of age) showed an adverse reaction after injection of tildipirosin: lateral recumbency and dullness for approximately 30 sec. The piglet recovered after 1-2 min. This adverse reaction was probably due to overdosing (12 mg/kg). No adverse effect of treatment was observed in any other piglet. There was no mortality within 24 h after treatment. No significant difference was found in mortality rate between the two groups from day 0 until 10 weeks of age. In farm A, the overall mortality rate was 0.3% (2/644). In farm B, the mortality rate was 0.2% (1/502) in group 1 (tildipirosin) and 0.9% (4/461) in group 2 (tulathromycin) (p = 0.60). Necropsy of the piglets that died during the study period revealed no macroscopic lesions of the respiratory tract. In conclusion, Zuprevo® (tildipirosin, 40 mg/mL) was shown to be a safe and efficacious alternative to Draxxin® (tulathromycin, 100 mg/mL) for the early treatment of swine respiratory disease at weaning age.
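As an illustrative cross-check of the farm B mortality comparison (the authors used binary logistic regression, not this test), Fisher's exact test on the 2x2 table of deaths versus survivors also shows no significant difference between the groups:

```python
from scipy.stats import fisher_exact

# Farm B mortality: 1/502 (tildipirosin) vs 4/461 (tulathromycin)
table = [[1, 502 - 1],   # group 1: deaths, survivors
         [4, 461 - 4]]   # group 2: deaths, survivors
odds_ratio, p = fisher_exact(table)
print(round(p, 2))
```

The exact p-value differs from the reported regression-based p = 0.60, but leads to the same conclusion at the 0.05 level.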

Keywords: antibiotic treatment, safety, swine respiratory disease, tildipirosin

Procedia PDF Downloads 394
703 Term Creation in Specialized Fields: An Evaluation of Shona Phonetics and Phonology Terminology at Great Zimbabwe University

Authors: Peniah Mabaso-Shamano

Abstract:

The paper evaluates Shona terms that were created to teach Phonetics and Phonology courses at Great Zimbabwe University (GZU). The phonetics and phonology terms discussed in this paper were created using different processes and strategies, such as translation, borrowing, neologising, compounding, transliteration and circumlocution, among many others. Most phonetics and phonology terms are alien to Shona, and as a result, there are no suitable Shona equivalents. The lecturers and students of these courses have a mammoth task of creating terminology for the different modules offered in Shona and other Zimbabwean indigenous languages. Most linguistic reference books are written in English. As such, lecturers and students translate information from English to Shona, a task that is proving to be very difficult for them. A term creation workshop was held at GZU to try to address the problem of lack of terminology in indigenous languages. Different indigenous language practitioners from different tertiary institutions convened for a two-day workshop at GZU. Due to the 'specialized' nature of phonetics and phonology, it was very difficult to come up with 'proper' indigenous terms. The researcher will consult lecturers who teach linguistics courses at tertiary institutions, as well as linguistics students, to get their views on the created terms. The people consulted will not be the ones who took part in the term creation workshop held at GZU. The selected participants will be asked to evaluate and back-translate some of the terms. In instances where they feel the terms created are not suitable or user-friendly, they will be asked to suggest other terms. Since the researcher is also a linguistics lecturer, her observations and views will be important. From her experience in using some of the terms in teaching phonetics and phonology courses to undergraduate students, the researcher noted that most of the terms created have shortcomings, since they are not user-friendly.
These shortcomings include terms that are longer than the English originals, as some terms are translated into Shona as whole statements. Most of these terms are neologisms, compound neologisms, transliterations, circumlocutions, and blends. The paper will show that there is an overuse of transliterated terms due to the lack of Shona equivalents for English terms. Most single English words were translated into compound neologisms or phrases after attempts to reduce them to one-word terms failed. In other instances, circumlocution produced terms longer than the original, and as a result, the terms are not user-friendly. The paper will discuss and evaluate the different phonetics and phonology terms created and the different strategies and processes used in creating them.

Keywords: blending, circumlocution, term creation, translation

Procedia PDF Downloads 144
702 Exploration and Evaluation of the Effect of Multiple Countermeasures on Road Safety

Authors: Atheer Al-Nuaimi, Harry Evdorides

Abstract:

Every day, many people die or are disabled or injured on roads around the world, which necessitates more specific treatments for transportation safety issues. The International Road Assessment Programme (iRAP) model is one of the comprehensive road safety models, accounting for many factors that affect road safety in a cost-effective way in low- and middle-income countries. In the iRAP model, road safety is divided into five star ratings, from 1 star (the lowest level) to 5 stars (the highest level). These star ratings are based on a star rating score calculated by the iRAP methodology from road attributes, traffic volumes and operating speeds. The outcome of the iRAP methodology is a set of treatments that can be used to improve road safety and reduce the numbers of fatalities and serious injuries (FSI). These countermeasures can be applied separately, as a single countermeasure, or combined as multiple countermeasures at a location. There is general agreement that the effectiveness of a countermeasure diminishes when it is used in combination with other countermeasures; that is, the crash-reduction estimates of individual countermeasures cannot simply be added together. The iRAP methodology therefore makes use of multiple-countermeasure adjustment factors to predict the reduction in the effectiveness of road safety countermeasures when more than one countermeasure is chosen. Multiple-countermeasure correction factors are calculated for every 100-metre segment and for every crash type. However, limitations of this methodology include a probable over-estimation of the predicted crash reduction. This study aims to adjust this correction factor by developing new models to calculate the effect of using multiple countermeasures on the number of fatalities for a location or an entire road. Regression models have been used to establish relationships between crash frequencies and the factors that affect their rates.
Multiple linear regression, negative binomial regression, and Poisson regression techniques were used to develop models that can address the effectiveness of using multiple countermeasures. Analyses conducted using the R statistical computing environment showed that a model developed with the negative binomial regression technique could give more reliable estimates of the predicted number of fatalities after the implementation of multiple road safety countermeasures than the iRAP model. The results also showed that the negative binomial regression approach gives more precise results than the multiple linear and Poisson regression techniques because it accounts for overdispersion and the associated standard error issues.
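The overdispersion that favours the negative binomial model over the Poisson can be illustrated by simulating counts from a gamma-Poisson mixture, which is exactly a negative binomial distribution. The parameter values here are arbitrary, chosen only to make the effect visible:

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulate crash counts as a gamma-Poisson mixture (negative binomial):
# lambda varies between sites, so counts spread out more than Poisson allows.
mu, alpha = 4.0, 1.5                                  # mean and overdispersion
lam = rng.gamma(shape=1.0 / alpha, scale=mu * alpha, size=10_000)
counts = rng.poisson(lam)

# Poisson would require variance ~= mean; here variance ~= mu + alpha * mu**2
print(counts.mean(), counts.var())
```

A Poisson fit to such data understates the standard errors, which is the "standard error issue" the abstract refers to.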

Keywords: international road assessment program, negative binomial, road multiple countermeasures, road safety

Procedia PDF Downloads 239
701 Bayesian Estimation of Hierarchical Models for Genotypic Differentiation of Arabidopsis thaliana

Authors: Gautier Viaud, Paul-Henry Cournède

Abstract:

Plant growth models have been used extensively for the prediction of the phenotypic performance of plants. However, they most often remain calibrated for a given genotype and therefore do not take into account genotype-by-environment interactions. One way of achieving such an objective is to consider Bayesian hierarchical models. Three levels can be identified in such models: the first level describes how a given growth model describes the phenotype of the plant as a function of individual parameters; the second level describes how these individual parameters are distributed within a plant population; the third level corresponds to the attribution of priors on population parameters. Thanks to the Bayesian framework, choosing appropriate priors for the population parameters makes it possible to derive analytical expressions for the full conditional distributions of these population parameters. As plant growth models are of a nonlinear nature, individual parameters cannot be sampled explicitly, and a Metropolis step must be performed. This allows for the use of a hybrid Gibbs-Metropolis sampler. A generic approach was devised for the implementation of both general state space models and estimation algorithms within a programming platform. It was designed using the Julia language, which combines an elegant syntax, metaprogramming capabilities, and high efficiency. Results were obtained for Arabidopsis thaliana on both simulated and real data. An organ-scale GreenLab model for the latter is thus presented, in which the surface areas of each individual leaf can be simulated. It is assumed that the error made on the measurement of leaf areas is proportional to the leaf area itself; multiplicative normal noises for the observations are therefore used.
Real data were obtained via image analysis of zenithal images of Arabidopsis thaliana over a period of 21 days, using a two-step segmentation and tracking algorithm which notably takes advantage of the Arabidopsis thaliana phyllotaxy. Since the model formulation is rather flexible, there is no need for the data for a single individual to be available at all times, nor for the times at which data are available to be the same for all individuals. This makes it possible to discard data from image analysis when they are not considered reliable enough, thereby providing low-biased data in large quantity for leaf areas. The proposed model precisely reproduces the dynamics of Arabidopsis thaliana's growth while accounting for the variability between genotypes. In addition to the estimation of the population parameters, the level of variability is an interesting indicator of the genotypic stability of model parameters. A promising perspective is to test whether some of the latter should be considered as fixed effects.
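The Metropolis step used for the individual parameters can be illustrated on a deliberately simple conjugate model, where the exact posterior is known and can be checked against the chain. This Python toy stands in for the Julia implementation described above; in a nonlinear growth model the analytic check would not exist, which is precisely why the sampling step is needed:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in: y_i ~ N(theta, sigma^2), prior theta ~ N(mu0, tau^2)
sigma, mu0, tau = 1.0, 0.0, 10.0
y = rng.normal(2.0, sigma, size=50)

def log_post(theta):
    # log-likelihood + log-prior (up to an additive constant)
    return (-0.5 * np.sum((y - theta) ** 2) / sigma**2
            - 0.5 * (theta - mu0) ** 2 / tau**2)

theta, chain = 0.0, []
for _ in range(20_000):
    prop = theta + rng.normal(0.0, 0.5)       # random-walk proposal
    if np.log(rng.random()) < log_post(prop) - log_post(theta):
        theta = prop                          # Metropolis accept
    chain.append(theta)
samples = np.array(chain[2_000:])             # discard burn-in

# Analytic posterior mean for the conjugate normal-normal model
prec = len(y) / sigma**2 + 1.0 / tau**2
post_mean = (y.sum() / sigma**2 + mu0 / tau**2) / prec
print(samples.mean(), post_mean)
```

In the hierarchical sampler, this Metropolis update for each individual parameter alternates with conjugate Gibbs draws of the population parameters.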

Keywords: bayesian, genotypic differentiation, hierarchical models, plant growth models

Procedia PDF Downloads 302
700 Learning with Music: The Effects of Musical Tension on Long-Term Declarative Memory Formation

Authors: Nawras Kurzom, Avi Mendelsohn

Abstract:

The effects of background music on learning and memory are inconsistent, partly due to the intrinsic complexity and variety of music and partly due to individual differences in music perception and preference. A prominent musical feature that is known to elicit strong emotional responses is musical tension. Musical tension can be brought about by building anticipation of rhythm, harmony, melody, and dynamics. Delaying the resolution of dominant-to-tonic chord progressions, as well as using dissonant harmonics, can elicit feelings of tension, which can, in turn, affect memory formation for concomitant information. The aim of the presented studies was to explore how declarative memory formation is influenced by musical tension, brought about within continuous music as well as in the form of isolated chords with varying degrees of dissonance/consonance. The effects of musical tension on long-term memory for declarative information were studied in two ways: 1) by evoking tension within continuous music pieces by delaying the release of harmonic progressions from dominant to tonic chords, and 2) by using isolated single complex chords with various degrees of dissonance/roughness. Musical tension was validated through subjective reports of tension, as well as physiological measurements of skin conductance response (SCR) and pupil dilation responses to the chords. In addition, music information retrieval (MIR) was used to quantify musical properties associated with tension and its release. Each experiment included an encoding phase, wherein individuals studied stimuli (words or images) under different musical conditions. Memory for the studied stimuli was tested 24 hours later via recognition tasks. In three separate experiments, we found positive relationships between tension perception and physiological measurements of SCR and pupil dilation. As for memory performance, we found that background music, in general, led to superior memory performance compared to silence.
We detected a trade-off effect between tension perception and memory, such that individuals who perceived musical tension as such displayed reduced memory performance for images encoded during musical tension, whereas tense music benefited memory for those who were less sensitive to the perception of musical tension. Musical tension thus interacts in complex ways with perception, emotional responses, and cognitive performance in individuals with and without musical training. Delineating the conditions and mechanisms that underlie the interactions between musical tension and memory can benefit our understanding of musical perception at large and the diverse effects that music has on the ongoing processing of declarative information.

Keywords: musical tension, declarative memory, learning and memory, musical perception

Procedia PDF Downloads 97
699 Comparative Study on Fire Safety Evaluation Methods for External Cladding Systems: ISO 13785-2 and BS 8414

Authors: Kyungsuk Cho, H. Y. Kim, S. U. Chae, J. H. Choi

Abstract:

Technological development has led to the construction of super-tall buildings, and insulators are increasingly used as exterior finishing materials to save energy. However, insulators are usually combustible and vulnerable to fire. The fires at the Wooshin Golden Suite Building in Busan, Korea in 2010 and at the CCTV Building in Beijing, China are major examples of fire spread accelerated by combustible insulators. The exterior finishing materials of a high-rise building are not made of insulators only; they are integrated into the building's external cladding system. There is a limit to evaluating the fire safety of a cladding system with a single small-unit test such as a cone calorimeter. Therefore, countries provide codes to evaluate the fire safety of exterior finishing materials using full-scale tests. This study compares the ISO 13785-2 and BS 8414 evaluation methods and examines their applicability to Korea. Analysis of the standards showed differences in the type, size, and duration of the fire sources, and the exterior finishing material specimens also differed in size. To confirm these differences, fire tests were conducted on identical external cladding systems to compare fire safety. Although the exterior finishing materials were identical, varying degrees of fire spread were observed, which can be attributed to the differences in the type, size, and duration of the fire sources. Therefore, it is deduced that extended studies should be conducted before the evaluation methods and standards are employed in Korea. The two standards for evaluating fire safety provided different results. The peak heat release rate was 5.5 MW in the ISO method and 3.0±0.5 MW in the BS method. The peak heat release rate in the ISO method was maintained for 15 minutes. Fire ignition, growth, full development and decay evolved over 30 minutes in the BS method, where wood cribs were used as fire sources.
Therefore, follow-up studies should be conducted to determine which of the two standards provides fire sources that approximate the size of flames coming out from the openings or those spreading to the outside when a fire occurs at a high-rise building.

Keywords: external cladding systems, fire safety evaluation, ISO 13785-2, BS 8414

Procedia PDF Downloads 241
698 Cancer Survivor’s Adherence to Healthy Lifestyle Behaviours; Meeting the World Cancer Research Fund/American Institute of Cancer Research Recommendations, a Systematic Review and Meta-Analysis

Authors: Daniel Nigusse Tollosa, Erica James, Alexis Hurre, Meredith Tavener

Abstract:

Introduction: Lifestyle behaviours such as a healthy diet, regular physical activity and maintaining a healthy weight are essential for cancer survivors to improve quality of life and longevity. However, no study has synthesized cancer survivors' adherence to healthy lifestyle recommendations. The purpose of this review was to collate existing data on the prevalence of adherence to healthy behaviours and produce pooled estimates among adult cancer survivors. Method: Multiple databases (Embase, Medline, Scopus, Web of Science and Google Scholar) were searched for relevant articles published since 2007 reporting cancer survivors' adherence to more than two lifestyle behaviours based on the WCRF/AICR recommendations. The pooled prevalence of adherence to single and multiple behaviours (operationalized as adherence to more than 75% (3/4) of the health behaviours included in a particular study) was calculated using a random effects model. Subgroup analysis of adherence to multiple behaviours was undertaken corresponding to mean survival years and year of publication. Results: A total of 3322 articles were generated through our search strategies. Of these, 51 studies matched our inclusion criteria, presenting data from 2,620,586 adult cancer survivors. The highest prevalence of adherence was observed for smoking (pooled estimate: 87%, 95% CI: 85%, 88%) and alcohol intake (pooled estimate: 83%, 95% CI: 81%, 86%), and the lowest was for fiber intake (pooled estimate: 31%, 95% CI: 21%, 40%). Thirteen studies reported the proportion of cancer survivors adhering (all used a simple summative index method) to multiple healthy behaviours, whereby the prevalence of adherence ranged from 7% to 40% (pooled estimate: 23%, 95% CI: 17% to 30%).
Subgroup analysis suggests that short-term survivors ( < 5 years survival time) had relatively better adherence to multiple behaviours (pooled estimate: 31%, 95% CI: 27%, 35%) than long-term ( > 5 years survival time) cancer survivors (pooled estimate: 25%, 95% CI: 14%, 36%). Pooling of estimates according to the year of publication (since 2007) also suggests an increasing trend of adherence to multiple behaviours over time. Conclusion: Overall, adherence to multiple lifestyle behaviours was poor, and it is a greater concern for long-term than for short-term cancer survivors. Cancer survivors need to comply with healthy lifestyle recommendations related to physical activity, fruit and vegetable, fiber, red/processed meat and sodium intake.
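The random-effects pooling described in the Method section can be sketched in outline. The routine below is a minimal illustration, assuming the DerSimonian-Laird estimator with a normal approximation on the raw proportion scale; the abstract does not specify which random-effects estimator or scale was actually used:

```python
import math

def pooled_prevalence(events, totals):
    """DerSimonian-Laird random-effects pooled prevalence (illustrative).

    events/totals: per-study counts of adherent survivors and sample sizes.
    Returns (pooled, ci_low, ci_high) on the raw proportion scale.
    """
    p = [e / n for e, n in zip(events, totals)]           # study prevalences
    v = [pi * (1 - pi) / n for pi, n in zip(p, totals)]   # within-study variances
    w = [1 / vi for vi in v]                              # fixed-effect weights
    fixed = sum(wi * pi for wi, pi in zip(w, p)) / sum(w)
    q = sum(wi * (pi - fixed) ** 2 for wi, pi in zip(w, p))   # Cochran's Q
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(p) - 1)) / c)               # between-study variance
    ws = [1 / (vi + tau2) for vi in v]                    # random-effects weights
    pooled = sum(wi * pi for wi, pi in zip(ws, p)) / sum(ws)
    se = math.sqrt(1 / sum(ws))
    return pooled, pooled - 1.96 * se, pooled + 1.96 * se
```

With homogeneous studies the between-study variance collapses to zero and the pooled estimate equals the common prevalence, which is a useful sanity check.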

Keywords: adherence, lifestyle behaviours, cancer survivors, WCRF/AICR

Procedia PDF Downloads 182
697 Simulation and Characterization of Stretching and Folding in Microchannel Electrokinetic Flows

Authors: Justo Rodriguez, Daming Chen, Amador M. Guzman

Abstract:

The detection, treatment, and control of rapidly propagating, deadly viruses such as COVID-19 require the development of inexpensive, fast, and accurate devices to address the urgent needs of the population. Microfluidics-based sensors are amongst the detection methods and techniques that are easy to use. A micro-analyzer is defined as a microfluidics-based sensor composed of a network of microchannels with varying functions. Given their size, portability, and accuracy, they are proving to be more effective and convenient than other solutions. A micro-analyzer based on the concept of “Lab on a Chip” presents advantages over other, non-miniaturized devices due to its smaller size and better ratio of useful area to volume. The integration of multiple processes in a single microdevice reduces both the number of necessary samples and the analysis time, leading to the next generation of analyzers for the health sciences. In some applications, the flow of solution within the microchannels is driven by a pressure gradient, which can produce adverse effects on biological samples. A more efficient and less harmful way of controlling the flow in a microchannel-based analyzer is to apply an electric field to induce the fluid motion and either enhance or suppress the mixing process. Electrokinetic flows are characterized by at least two non-dimensional parameters: the electric Rayleigh number and the geometrical aspect ratio. In this research, stable and unstable flows have been studied numerically (and, where possible, experimentally) in a T-shaped microchannel. Additionally, unstable electrokinetic flows for Rayleigh numbers higher than the critical value have been characterized. The flow mixing enhancement was quantified in terms of the stretching and folding that fluid particles undergo when they are subjected to supercritical electrokinetic flows.
Computational simulations were carried out using a finite element-based program, working with the flow mixing concepts developed by Gollub and collaborators. Hundreds of seeded massless particles were tracked along the microchannel from entrance to exit for both stable and unstable flows. After post-processing their trajectories, the folding and stretching values for the different flows were found. Numerical results show that for supercritical electrokinetic flows, the enhancement effects of the folding and stretching processes become more apparent. Consequently, there is an improvement in the mixing process, ultimately leading to a more homogeneous mixture.
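The stretching quantification can be illustrated with a small sketch. One common operationalization (an assumption here; the paper follows the definitions of Gollub and collaborators) measures how much a material line of seeded tracer particles lengthens as it is advected through the channel:

```python
import math

def polyline_length(points):
    """Total length of a material line sampled by tracer particles,
    given as a sequence of (x, y) positions."""
    return sum(math.dist(a, b) for a, b in zip(points, points[1:]))

def stretching(initial, deformed):
    """Stretching factor of a material line advected by the flow:
    ratio of its deformed length to its initial length.  Values well
    above 1 indicate strong stretching, as in supercritical flows."""
    return polyline_length(deformed) / polyline_length(initial)
```

In practice the line would be re-seeded as particles separate, but the ratio above captures the quantity being compared between stable and unstable flows.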

Keywords: microchannel, stretching and folding, electrokinetic flow mixing, micro-analyzer

Procedia PDF Downloads 124
696 In silico Designing of Imidazo [4,5-b] Pyridine as a Probable Lead for Potent Decaprenyl Phosphoryl-β-D-Ribose 2′-Epimerase (DprE1) Inhibitors as Antitubercular Agents

Authors: Jineetkumar Gawad, Chandrakant Bonde

Abstract:

Tuberculosis (TB) is a major worldwide concern whose control has been exacerbated by HIV and the rise of multidrug-resistant (MDR-TB) and extensively drug-resistant (XDR-TB) strains of Mycobacterium tuberculosis. The interest in newer and faster-acting antitubercular drugs is greater than ever, and the search for potent compounds is both a need and a challenge for researchers. Here, we tried to design a lead for inhibition of the decaprenylphosphoryl-β-D-ribose 2′-epimerase (DprE1) enzyme. Arabinose is an essential constituent of the mycobacterial cell wall. DprE1 is a flavoenzyme that converts decaprenylphosphoryl-D-ribose into decaprenylphosphoryl-2-keto-ribose, an intermediate in the biosynthetic pathway of arabinose; DprE2 then converts the keto-ribose into decaprenylphosphoryl-D-arabinose. A selection of 23 compounds from the azaindole series was made for the computational study, and they were drawn using MarvinSketch. Ligands were prepared using the Maestro molecular modeling interface, Schrodinger, v10.5. Common pharmacophore hypotheses were developed by applying dataset thresholds to yield active and inactive sets of compounds; 326 hypotheses were developed, and on the basis of survival score, ADRRR (survival score: 5.453) was selected. The selected pharmacophore hypothesis was subjected to virtual screening, resulting in 1000 hits. The protein 4KW5 (an oxidoreductase) was downloaded in .pdb format from the RCSB Protein Data Bank and prepared using the protein preparation wizard: the protein was preprocessed and the workspace was analyzed using the OPLS 2005 force field. The Glide grid was generated by picking a single atom in the molecule. Prepared ligands were docked with the prepared protein 4KW5 using Glide docking. After docking, the top five compounds were selected on the basis of Glide score (5223, 5812, 0661, 0662, and 2945, with Glide docking scores of -8.928, -8.534, -8.412, -8.411, and -8.351, respectively).
Ligand-protein interactions were observed, specifically with HIS 132, LYS 418, TYR 230, and ASN 385. Pi-pi stacking was observed in a few compounds with the basic imidazo [4,5-b] pyridine ring. The parent compounds had a basic azaindole ring, but after Glide docking, we obtained compounds with imidazo [4,5-b] pyridine as the basic ring. That might be the new lead in the drug discovery process.
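The final ranking step, selecting the top five hits by Glide score, reduces to a sort in which more negative scores indicate stronger predicted binding. A minimal sketch (the actual screening used Schrodinger's Glide; the helper below is illustrative only, with the scores reported above):

```python
def top_hits(scores, n=5):
    """Rank docked ligands by docking score (more negative = stronger
    predicted binding) and return the n best (ligand_id, score) pairs."""
    return sorted(scores.items(), key=lambda kv: kv[1])[:n]

# Glide docking scores reported in the abstract for the five best hits:
reported = {"5223": -8.928, "5812": -8.534, "0661": -8.412,
            "0662": -8.411, "2945": -8.351}
```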

Keywords: DprE1 inhibitors, in silico drug designing, imidazo [4, 5-b] pyridine, lead, tuberculosis

Procedia PDF Downloads 153
695 Boko Haram Insurrection and Religious Revolt in Nigeria: An Impact Assessment-{2009-2015}

Authors: Edwin Dankano

Abstract:

Boko Haram, a terrorist organization also known as “Jama’atul Ahli Sunnah Lidda’wati wal Jihad”, or “people committed to the propagation of the Prophet’s teachings and jihad”, poses, as is evident from its incessant and sporadic attacks on Nigerians, a serious threat to the unity of Nigeria, and it is the single biggest security nightmare to confront Nigeria since the amalgamation of the Southern and Northern protectorates by the British colonialists in 1914. The sect also upholds an ideology translated as “Western education is forbidden”, a rejection of Western civilization and institutions. By some estimates, more than 5,500 people were killed in Boko Haram attacks in 2014, and Boko Haram attacks had already claimed hundreds of lives and territories {caliphates} in early 2015. In total, the group may have killed more than 10,000 people since its emergence in the early 2000s. More than 1 million Nigerians have been displaced internally by the violence, and Nigerian refugee figures in neighboring countries continue to rise. This paper is predicated on secondary sources of data and anchored on Huntington’s theory of the clash of civilizations. As such, the paper argues that the rise of Boko Haram, with its violent disposition against Western values, is a counter-response to a Western civilization that is fast eclipsing other civilizations. The paper posits that the Boko Haram insurrection, going by its teachings and its destruction of churches, validates characterizing the sect as a religious revolt, one which has resulted in a dire humanitarian situation in Adamawa, Borno, Yobe, Bauchi, and Gombe states, all in north-eastern Nigeria, as evident in human casualties, human rights abuses, population displacement, the refugee debacle, livelihood crises, and public insecurity.
The paper submits that the Nigerian state should muster the needed political will in terms of viable anti-terrorism measures, build strong legitimate institutions that can adequately curb the menace of corruption that has engulfed the military hierarchy, respond proactively to the challenge of terrorism in Nigeria, and embrace a strategic paradigm shift from anti-terrorism to counter-terrorism as a strategy for containing the crisis that today threatens the secular status of Nigeria.

Keywords: Boko Haram, civilization, fundamentalism, Islam, religious revolt, terror

Procedia PDF Downloads 398
694 Finite Element Molecular Modeling: A Structural Method for Large Deformations

Authors: A. Rezaei, M. Huisman, W. Van Paepegem

Abstract:

Atomic interactions in molecular systems are mainly studied by particle mechanics. Nevertheless, researchers have also put considerable effort into simulating them using continuum methods. In the early 2000s, simple equivalent finite element models were developed to study the mechanical properties of carbon nanotubes and graphene in composite materials. Afterward, many researchers employed similar structural simulation approaches to obtain the mechanical properties of nanostructured materials, to simplify the interface behavior of fiber-reinforced composites, and to simulate defects in carbon nanotubes or graphene sheets. These structural approaches, however, are limited to small deformations due to complicated local rotational coordinates. This article proposes a method for the finite element simulation of molecular mechanics, here called Structural Finite Element Molecular Modeling (SFEMM) for ease of reference. The SFEMM method improves on the available structural approaches for large deformations without using any rotational degrees of freedom. Moreover, the method simulates molecular conformation, which is a big advantage over the previous approaches. Technically, the method uses nonlinear multipoint constraints to simulate the kinematics of the atomic multibody interactions. Only truss elements are employed, and the bond potentials are implemented through constitutive material models. Because the equilibrium bond length, bond angles, and bond-torsion potential energies are intrinsic material parameters, the model is independent of initial strains or stresses. In this paper, the SFEMM method has been implemented in the ABAQUS finite element software. The constraints and material behaviors are modeled through two Fortran subroutines. The method is verified for the bond-stretch, bond-angle, and bond-torsion behavior of carbon atoms.
Furthermore, the capability of the method in the conformation simulation of molecular structures is demonstrated via a case study of a graphene sheet. Briefly, SFEMM builds up a framework that offers more flexible features than the conventional molecular finite element models, serving structural relaxation modeling and large deformations without incorporating local rotational degrees of freedom. Potentially, the method is a big step towards comprehensive molecular modeling with the finite element technique, and thereby towards concurrently coupling an atomistic domain to a solid continuum domain within a single finite element platform.
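The idea of implementing a bond potential as the constitutive model of a truss element can be sketched for the simplest case, a harmonic bond-stretch potential. The stiffness and equilibrium length below are illustrative values for a carbon-carbon bond in graphene, not parameters taken from the paper, and the paper's actual subroutines handle the full nonlinear multibody kinematics:

```python
def bond_stretch_energy(r, r0=0.142, k=652.0):
    """Stored energy of a harmonic bond-stretch potential
    U(r) = 0.5 * k * (r - r0)**2, in the truss element's units.

    r  : current bond length (nm)
    r0 : equilibrium bond length (nm); 0.142 nm assumed for graphene
    k  : stretch stiffness (nN/nm); illustrative value only
    """
    return 0.5 * k * (r - r0) ** 2

def bond_stretch_force(r, r0=0.142, k=652.0):
    """Axial force dU/dr in the truss element: positive (tensile)
    when the bond is stretched beyond r0, negative in compression."""
    return k * (r - r0)
```

Because the force vanishes exactly at r0, a mesh built at equilibrium bond lengths carries no initial stress, which mirrors the paper's point that the model is independent of initial strains or stresses.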

Keywords: finite element, large deformation, molecular mechanics, structural method

Procedia PDF Downloads 151
693 Imaging of Underground Targets with an Improved Back-Projection Algorithm

Authors: Alireza Akbari, Gelareh Babaee Khou

Abstract:

Ground Penetrating Radar (GPR) is an important nondestructive remote sensing tool that has been used in both military and civilian fields. Recently, GPR imaging has attracted considerable attention for the detection of shallow subsurface targets such as landmines and unexploded ordnance, and also for through-wall imaging in security applications. For the monostatic arrangement, a single point target appears in the space-time GPR image as a hyperbolic curve because of the different trip times of the EM wave as the radar moves along a synthetic aperture and collects the reflectivity of the subsurface targets. With this hyperbolic curve, the resolution along the synthetic aperture direction shows undesired low-resolution features owing to the tails of the hyperbola. However, highly accurate information about the size, electromagnetic (EM) reflectivity, and depth of the buried objects is essential in most GPR applications. Therefore, it is often desirable to transform the hyperbolic signature in the space-time GPR image into a focused pattern showing the object's true location and size together with its EM scattering. The common goal in a typical GPR image is to display the spatial location and the reflectivity of an underground object. The main challenge of a GPR imaging technique is therefore to devise an image reconstruction algorithm that provides high resolution and good suppression of strong artifacts and noise. In this paper, the standard back-projection (BP) algorithm, adapted to GPR imaging applications, is first used for image reconstruction. The standard BP algorithm is limited in the presence of strong noise and numerous artifacts, which adversely affect subsequent tasks such as target detection. Thus, an improved BP based on cross-correlation between the received signals is proposed to decrease noise and suppress artifacts.
To improve the quality of the results of the proposed BP imaging algorithm, a weight factor was designed for each point in the imaging region. Compared to the standard BP algorithm, the improved algorithm produces images of higher quality and resolution. The proposed improved BP algorithm was applied to both simulated and real GPR data, and the results showed that it offers superior artifact suppression and produces images of high quality and resolution. To quantitatively describe the effect of artifact suppression on the imaging results, a focusing parameter was evaluated.
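The standard BP step described above can be sketched as a delay-and-sum over antenna positions: each image pixel accumulates the trace samples at its two-way travel time, which collapses the hyperbolic signature onto the target location. A minimal monostatic sketch, with nearest-sample interpolation and a user-supplied homogeneous wave velocity as simplifying assumptions:

```python
import math

def back_project(traces, ant_x, dt, pixels, v):
    """Standard delay-and-sum back-projection for monostatic GPR.

    traces : list of A-scans; traces[i][k] = sample k at antenna ant_x[i]
    ant_x  : antenna positions along the scan line
    dt     : time sampling interval
    pixels : list of (x, z) image points (z = depth below the scan line)
    v      : assumed wave velocity in the ground (consistent units)
    Returns the focused amplitude at each pixel.
    """
    image = []
    for x, z in pixels:
        acc = 0.0
        for trace, xa in zip(traces, ant_x):
            t = 2.0 * math.hypot(x - xa, z) / v   # two-way travel time
            k = int(round(t / dt))                # nearest-sample index
            if 0 <= k < len(trace):
                acc += trace[k]                   # delay-and-sum
        image.append(acc)
    return image
```

The improved algorithm of the paper additionally weights each pixel by the cross-correlation of the received signals, so incoherent noise that happens to align along one hyperbola is down-weighted rather than summed.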

Keywords: algorithm, back-projection, GPR, remote sensing

Procedia PDF Downloads 450
692 Sequential Padding: A Method to Improve the Impact Resistance in Body Armor Materials

Authors: Ankita Srivastava, Bhupendra S. Butola, Abhijit Majumdar

Abstract:

Application of shear thickening fluid (STF) has been shown to increase the impact resistance of textile structures for use as body armor materials. In the present research, STF was applied on Kevlar woven fabric to make the structure lightweight and flexible while improving its impact resistance. It was observed that achieving a fair add-on of STF on Kevlar fabric is difficult, as Kevlar fabric comes with a pre-coating of PTFE which hinders its absorbency. Hence, a method termed sequential padding was developed in the present study to improve the add-on of STF on Kevlar fabric. Contrary to the conventional process, where Kevlar fabric is treated with STF once at a single pressure, in the sequential padding method the Kevlar fabrics were treated twice in a sequential manner using a combination of two pressures for each sample. 200 GSM Kevlar fabrics were used in the present study. STF was prepared by dispersing nano-silica in PEG at 70% (w/w) concentration. Ethanol was added to the STF at a fixed ratio to reduce viscosity, and a high-speed homogenizer was used to make the dispersion. In total, nine STF-treated Kevlar fabric samples were prepared using varying combinations and sequences of three levels of padding pressure (0.5, 1.0 and 2.0 bar). The fabrics were dried at 80°C for 40 minutes in a hot air oven to evaporate the ethanol. Untreated and STF-treated fabrics were tested for add-on%. The impact resistance of the samples was also tested on a dynamic impact tester at a fixed velocity of 6 m/s. Further, to observe the impact resistance under actual conditions, a low-velocity ballistic test at 165 m/s was also performed to confirm the results of the impact resistance test. It was observed that both add-on% and impact energy absorption of Kevlar fabrics increase significantly with the sequential padding process as compared to the untreated fabric as well as the single-stage padding process.
It was also determined that impact energy absorption is significantly better in STF-treated Kevlar fabrics when the first padding pressure is higher and the second padding pressure is lower. The sequentially padded Kevlar fabric shows an almost 125% increase in ballistic impact energy absorption (40.62 J) as compared to the untreated fabric (18.07 J). The results owe to the fact that treatment of the fabrics at high pressure during the first padding is responsible for uniform distribution of STF within the fabric structure, while padding at a lower second pressure ensures a high add-on of STF for overall improvement in the impact resistance of the fabric. Therefore, it is concluded that the sequential padding process may help to improve the impact performance of body armor materials based on STF-treated Kevlar fabrics.
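The reported gain is consistent with the quoted energies: (40.62 - 18.07) / 18.07 gives roughly 124.8%, i.e. "almost 125%". As a quick check:

```python
def percent_increase(treated, untreated):
    """Percentage increase of treated over untreated, e.g. in absorbed
    ballistic impact energy (J)."""
    return (treated - untreated) / untreated * 100.0

# Values reported in the abstract (J): 40.62 treated vs 18.07 untreated.
gain = percent_increase(40.62, 18.07)   # ~124.8%, i.e. "almost 125%"
```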

Keywords: body armor, impact resistance, Kevlar, shear thickening fluid

Procedia PDF Downloads 238
691 Effects of Bleaching Procedures on Dentine Sensitivity

Authors: Suhayla Reda Al-Banai

Abstract:

Problem Statement: Tooth whitening has been used for over one hundred and fifty years. The question concerning the whiteness of teeth is a complex one, since tooth whiteness varies from individual to individual, dependent on age, culture, etc. Tooth whiteness following treatment may depend on the type of whitening system used. There are a few side-effects to the process, including tooth sensitivity and gingival irritation, although some individuals may experience no pain or sensitivity following the procedure. Purpose: To systematically review the available published literature until 31st December 2021, to identify all relevant studies for inclusion, and to determine whether there was any evidence demonstrating that the application of whitening procedures resulted in tooth sensitivity. Aim: To systematically review the available published literature to identify all relevant studies for inclusion and to determine any evidence demonstrating that application of 10% and 15% carbamide peroxide in tooth whitening procedures resulted in tooth sensitivity. Material and Methods: Following a review of 70 relevant papers from searching both electronic databases (OVID MEDLINE and PUBMED) and hand-searching of relevant printed journals, 49 studies were identified, 42 papers were subsequently excluded, and 7 studies were finally accepted for inclusion. The extraction of data for inclusion was conducted by two reviewers. The main outcome measures were the methodology and assessment used by investigators to evaluate tooth sensitivity in tooth whitening studies. Results: The reported evaluation of tooth sensitivity during tooth whitening procedures was based on the subjective responses of subjects rather than a recognized evaluation methodology. One of the problems in evaluation was the lack of homogeneity in study design. Seven studies were included.
The included studies featured randomized groups, placebo controls, and double-blind or single-blind designs. Drop-out data were obtained from two of the included studies. Three of the included studies reported sensitivity at the baseline visit, and two mentioned the exclusion criteria. Conclusions: The results were inconclusive due to the limited number of included studies, the study methodologies, and the way dentine sensitivity (DS) was evaluated. Tooth whitening procedures adversely affect both hard and soft tissues in the oral cavity. Side-effects are mild and transient in nature. Whitening solutions with greater than 10% carbamide peroxide cause more tooth sensitivity. Studies using nightguard vital bleaching with 10% carbamide peroxide reported two side-effects, tooth sensitivity and gingival irritation, although tooth sensitivity was more prevalent than gingival irritation.

Keywords: dentine, sensitivity, bleaching, carbamide peroxide

Procedia PDF Downloads 69