Search results for: neo-liberal higher education
1726 An Experimental Determination of the Limiting Factors Governing the Operation of High-Hydrogen Blends in Domestic Appliances Designed to Burn Natural Gas
Authors: Haiqin Zhou, Robin Irons
Abstract:
The introduction of hydrogen into local networks may, in many cases, require the initial operation of those systems on natural gas/hydrogen blends, either because of a lack of sufficient hydrogen to allow a 100% conversion or because existing infrastructure imposes limitations on the percentage of hydrogen that can be burned before the end-use technologies are replaced. In many systems, the most numerous end-use technologies are the small-scale appliances used for domestic and industrial heating and cooking. In such a scenario, it is important to understand exactly how much hydrogen can be introduced into these appliances before their performance becomes unacceptable, and what imposes that limitation. This study explores a range of significantly higher hydrogen blends and a broad range of factors that might limit operability or environmental acceptability. We present tests on a burner designed for space heating and optimized for natural gas, in which blends of increasing hydrogen content (starting from 25%) were burned, and explore the range of parameters that might govern the acceptability of operation. These include gaseous emissions (particularly NOx and unburned carbon), temperature, flame length, stability, and general operational acceptability. Results show emissions, temperature, and flame length as a function of thermal load and the percentage of hydrogen in the blend. The relevant application and regulation will ultimately determine the acceptability of these values, so it is important to understand the full operational envelope of the burners in question through the sort of extensive parametric testing we have carried out. The present dataset should represent a useful data source for designers interested in exploring appliance operability. In addition, we present data on two factors that may be absolute in determining allowable hydrogen percentages. The first of these is flame blowback.
Our results show that, for our system, the threshold between acceptable and unacceptable performance lies between 60 and 65 mol% hydrogen. Another factor that may limit operation, and which would be important in domestic applications, is the acoustic performance of these burners. We describe a range of operational conditions in which hydrogen blend burners produce a loud and invasive ‘screech’. It will be important for equipment designers and users to find ways to avoid or mitigate this if performance is to be deemed acceptable.
Keywords: blends, operational, domestic appliances, future system operation.
Procedia PDF Downloads 24
1725 Handling, Exporting and Archiving Automated Mineralogy Data Using TESCAN TIMA
Authors: Marek Dosbaba
Abstract:
Within the mining sector, SEM-based Automated Mineralogy (AM) has become the standard application for quickly and efficiently handling mineral processing tasks. Over the last decade, the trend has been to analyze larger numbers of samples, often with a higher level of detail. This has necessitated a shift from interactive sample analysis performed by an operator using a SEM to an increased reliance on offline processing to analyze and report the data. In response to this trend, the TESCAN TIMA Mineral Analyzer is designed to quickly create a virtual copy of the studied samples, thereby preserving all the necessary information. Depending on the selected data acquisition mode, TESCAN TIMA can perform hyperspectral mapping and save an X-ray spectrum for each pixel or each segment. This approach allows the user to browse through elemental distribution maps of all elements detectable by means of energy-dispersive spectroscopy. Re-evaluation of the existing data for the presence of previously unconsidered elements is possible without the need to repeat the analysis. Additional tiers of data, such as secondary electron or cathodoluminescence images, can also be recorded. To take full advantage of these information-rich datasets, TIMA utilizes a new archiving tool introduced by TESCAN. The dataset size can be reduced for long-term storage, and all information can be recovered on demand in case of renewed interest. TESCAN TIMA is optimized for network storage of its datasets because of the larger data storage capacity of servers compared to local drives, which also allows multiple users to access the data remotely. This goes hand in hand with support for remote control of the entire data acquisition process. TESCAN also brings a newly extended open-source data format that allows other applications to extract, process and report AM data. This offers the ability to link TIMA data to large databases feeding plant performance dashboards or geometallurgical models.
The traditional tabular particle-by-particle or grain-by-grain export process is preserved and can be customized with scripts to include user-defined particle/grain properties.
Keywords: Tescan, electron microscopy, mineralogy, SEM, automated mineralogy, database, TESCAN TIMA, open format, archiving, big data
Procedia PDF Downloads 110
1724 The Effect of Different Strength Training Methods on Muscle Strength, Body Composition and Factors Affecting Endurance Performance
Authors: Shaher A. I. Shalfawi, Fredrik Hviding, Bjornar Kjellstadli
Abstract:
The main purpose of this study was to measure the effect of two different strength training methods on muscle strength, muscle mass, fat mass and endurance factors. Fourteen physical education students agreed to participate in this study. The participants were randomly divided into three groups: a traditional training group (TTG), a cluster training group (CTG) and a control group (CG). TTG consisted of 4 participants, aged (mean ± SD) 22.3 ± 1.5 years, body mass 79.2 ± 15.4 kg and height 178.3 ± 11.9 cm. CTG consisted of 5 participants, aged 22.2 ± 3.5 years, body mass 81.0 ± 24.0 kg and height 180.2 ± 12.3 cm. CG consisted of 5 participants, aged 22 ± 2.8 years, body mass 77 ± 19 kg and height 174 ± 6.7 cm. The participants underwent a hypertrophy strength training program twice a week, consisting of 4 sets of 10 reps at 70% of one-repetition maximum (1RM) using the barbell squat and barbell bench press, for 8 weeks. The CTG performed 2 x 5 reps with 10 s recovery between repetitions and 50 s recovery between sets, while the TTG performed 4 sets of 10 reps with 90 s recovery between sets. Pre- and post-tests were administered to assess body composition (weight, muscle mass, and fat mass), 1RM (bench press and barbell squat) and a laboratory endurance test (Bruce protocol). The instruments used to collect the data were a Tanita BC-601 scale (Tanita, Illinois, USA), a Woodway treadmill (Woodway, Wisconsin, USA) and a Vyntus CPX breath-to-breath system (Jaeger, Hoechberg, Germany). Analysis was conducted on all measured variables, including time to peak VO2, peak VO2, heart rate (HR) at peak VO2, respiratory exchange ratio (RER) at peak VO2, and number of breaths per minute. The results indicate an increase in 1RM performance after 8 weeks of training. The change in 1RM squat was TTG = 30 ± 3.8 kg, CTG = 28.6 ± 8.3 kg and CG = 10.3 ± 13.8 kg. Similarly, the change in 1RM bench press was TTG = 9.8 ± 2.8 kg, CTG = 7.4 ± 3.4 kg and CG = 4.4 ± 3.4 kg.
The within-group analysis of the oxygen consumption measured during the incremental exercise indicated that the TTG had a statistically significant increase only in RER, from 1.16 ± 0.04 to 1.23 ± 0.05 (P < 0.05). The CTG had a statistically significant improvement in HR at peak VO2, from 186 ± 24 to 191 ± 12 beats per minute (P < 0.05), and in RER at peak VO2, from 1.11 ± 0.06 to 1.18 ± 0.05 (P < 0.05). Finally, the CG had a statistically significant increase only in RER at peak VO2, from 1.11 ± 0.07 to 1.21 ± 0.05 (P < 0.05). The between-group analysis showed no statistically significant differences between the groups in any of the variables measured during the incremental exercise test, or in the changes in muscle mass, fat mass, and weight (kg). The results indicate a similar effect of hypertrophy strength training on untrained subjects irrespective of the training method used. Because there were no notable changes in body-composition measures, the results suggest that the improvements in performance observed in all groups are most probably due to neuromuscular adaptation to training.
Keywords: hypertrophy strength training, cluster set, Bruce protocol, peak VO2
Procedia PDF Downloads 250
1723 Cr (VI) Adsorption on Ce0.25Zr0.75O2.nH2O: Kinetics and Thermodynamics
Authors: Carlos Alberto Rivera-corredor, Angie Dayana Vargas-Ceballos, Edison Gilpavas, Izabela Dobrosz-Gómez, Miguel Ángel Gómez-García
Abstract:
Hexavalent chromium, Cr (VI), is present in the effluents of different industries such as electroplating, mining, leather tanning, etc. This compound is of great academic and industrial concern because of its toxic and carcinogenic behavior. Its discharge into water sources causes serious environmental and public health problems for both animals and humans. The amount of Cr (VI) in industrial wastewaters ranges from 0.5 to 270,000 mg L-1. According to the Colombian standard for water quality (NTC-813-2010), the maximum allowed concentration of Cr (VI) in drinking water is 0.05 mg L-1. To comply with this limit, it is essential that industries treat their effluents to reduce the Cr (VI) to acceptable levels. Numerous methods have been reported for removing metal ions from aqueous solutions, such as reduction, ion exchange, electrodialysis, etc. Adsorption has become a promising method for the purification of metal ions in water, since it corresponds to an economic and efficient technology. The selection of the adsorbent and the kinetic and thermodynamic study of the adsorption conditions are key to the development of a suitable adsorption technology. Ce0.25Zr0.75O2.nH2O presents the highest adsorption capacity among a series of hydrated mixed oxides Ce1-xZrxO2 (x = 0, 0.25, 0.5, 0.75, 1). This work presents the kinetic and thermodynamic study of Cr (VI) adsorption on Ce0.25Zr0.75O2.nH2O. Experiments were performed under the following conditions: initial Cr (VI) concentration = 25, 50 and 100 mg L-1, pH = 2, adsorbent charge = 4 g L-1, stirring time = 60 min, temperature = 20, 28 and 40 °C. The Cr (VI) concentration was estimated spectrophotometrically by the diphenylcarbazide method, monitoring the absorbance at 540 nm. The Cr (VI) adsorption on hydrated Ce0.25Zr0.75O2.nH2O was analyzed using pseudo-first and pseudo-second order kinetic models, and the Langmuir and Freundlich models were used to model the equilibrium data.
The convergence between the experimental values and those predicted by each model, expressed as the linear regression correlation coefficient (R2), was employed as the model selection criterion. The adsorption process followed the pseudo-second order kinetic model and obeyed the Langmuir isotherm model. The thermodynamic parameters were calculated as ΔH° = 9.04 kJ mol-1, ΔS° = 0.03 kJ mol-1 K-1 and ΔG° = -0.35 kJ mol-1, indicating the endothermic and spontaneous nature of the adsorption process, governed by physisorption interactions.
Keywords: adsorption, hexavalent chromium, kinetics, thermodynamics
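The thermodynamic parameters reported above are tied together by the standard relation ΔG° = ΔH° - TΔS°. A minimal sketch reproducing the reported values follows; the choice T = 313 K (40 °C, the highest temperature studied) is an assumption, since the abstract does not state which temperature the ΔG° value refers to:

```python
# Hedged sketch: checking dG = dH - T*dS for the Cr(VI) adsorption study.
# dH and dS are taken from the abstract; T = 313 K is an assumption that
# makes the reported numbers mutually consistent.

dH = 9.04   # kJ/mol, enthalpy change (positive: endothermic)
dS = 0.03   # kJ/(mol*K), entropy change
T = 313.0   # K, assumed temperature (40 degC)

dG = dH - T * dS  # Gibbs free energy change, kJ/mol
print(round(dG, 2))  # -0.35, i.e. negative: spontaneous
```

A negative ΔG° together with a positive ΔH° shows the process is entropy-driven, consistent with the abstract's conclusion of spontaneous, endothermic physisorption.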
Procedia PDF Downloads 300
1722 Towards a Strategic Framework for State-Level Epistemological Functions
Authors: Mark Darius Juszczak
Abstract:
While epistemology, as a sub-field of philosophy, is generally concerned with theoretical questions about the nature of knowledge, the explosion in digital media technologies has resulted in an exponential increase in the storage and transmission of human information. That increase has resulted in a particular non-linear dynamic – digital epistemological functions are radically altering how and what we know. Neither the rate of that change nor the consequences of it have been well studied or taken into account in developing state-level strategies for epistemological functions. At the current time, US Federal policy, like that of virtually all other countries, maintains, at the national state level, clearly defined boundaries between various epistemological agencies - agencies that, in one way or another, mediate the functional use of knowledge. These agencies can take the form of patent and trademark offices, national library and archive systems, departments of education, departments such as the FTC, university systems and regulations, military research systems such as DARPA, federal scientific research agencies, medical and pharmaceutical accreditation agencies, federal funding for scientific research and legislative committees and subcommittees that attempt to alter the laws that govern epistemological functions. All of these agencies are in the constant process of creating, analyzing, and regulating knowledge. Those processes are, at the most general level, epistemological functions – they act upon and define what knowledge is. At the same time, however, there are no high-level strategic epistemological directives or frameworks that define those functions. The only time in US history where a proxy state-level epistemological strategy existed was between 1961 and 1969 when the Kennedy Administration committed the United States to the Apollo program. 
While that program had a singular technical objective as its outcome, that objective was so technologically advanced for its day, and so complex, that it required a massive redirection of state-level epistemological functions – in essence, a broad and diverse set of state-level agencies suddenly found themselves working together towards a common epistemological goal. This paper does not call for a repeat of the Apollo program. Rather, its purpose is to investigate the minimum structural requirements for a national state-level epistemological strategy in the United States. In addition, this paper seeks to analyze how the epistemological work of the multitude of national agencies within the United States would be affected by such a high-level framework. This paper is an exploratory study of this type of framework. The primary hypothesis of the author is that such a function is possible but would require extensive re-framing and reclassification of traditional epistemological functions at the respective agency level. In much the same way that, for example, the DHS (Department of Homeland Security) evolved to respond to a new type of security threat to the United States, it is theorized that a lack of coordination and alignment in epistemological functions will equally result in a strategic threat to the United States.
Keywords: strategic security, epistemological functions, epistemological agencies, Apollo program
Procedia PDF Downloads 77
1721 Bending the Consciousnesses: Uncovering Environmental Issues Through Circuit Bending
Authors: Enrico Dorigatti
Abstract:
The pile of hazardous e-waste, produced especially by developed and wealthy countries, grows relentlessly; it is composed of EEDs (Electric and Electronic Devices) that are often thrown away although still well functioning, mainly due to (programmed) obsolescence. As a consequence, e-waste has taken, over recent years, the shape of a frightful, uncontrollable, and unstoppable phenomenon, mainly fuelled by market policies aiming to maximize sales (and thus profits) at any cost. Against it, governments and organizations have put effort into developing ambitious frameworks and policies aiming to regulate, in some cases, the whole lifecycle of EEDs, from design to recycling. Incidentally, however, such regulations sometimes make the disposal of the devices economically unprofitable, which often translates into growing illegal e-waste trafficking, an activity usually undertaken by criminal organizations. It seems that nothing, at least in the near future, can stop the phenomenon of e-waste production and accumulation. But while, from a practical standpoint, a solution seems hard to find, much can be done regarding people's education, which translates into informing and promoting good practices such as reusing and repurposing. This research argues that circuit bending, an activity rooted in neo-materialist philosophy and post-digital aesthetics and based on repurposing EEDs into novel music instruments and sound generators, could have great potential in this respect. In particular, it asserts that circuit bending could expose ecological, environmental, and social criticalities related to current market policies and the economic model, thanks not only to its practical side (e.g., sourcing and repurposing devices) but also to its artistic one (e.g., employing bent instruments for ecologically aware installations and performances).
Currently, the relevant literature and debate lack interest in, and information about, the ecological aspects and implications of the practical and artistic sides of circuit bending. This research, therefore, although still at an early stage, aims to fill this gap by investigating, on the one side, the ecological potential of circuit bending and, on the other, its capacity to sensitize people, through artistic practice, to e-waste-related issues. The methodology articulates into three main steps. Firstly, field research will be undertaken, with the purpose of understanding where and how to source, in an ecological and sustainable way, (discarded) EEDs for circuit bending. Secondly, artistic installations and performances will be organized to sensitize the audience to environmental concerns through sound art and music derived from bent instruments. Data, such as audience feedback, will be collected at this stage. The last step will consist of realising workshops to spread an ecologically aware circuit bending practice. Additionally, all the data and findings collected will be made available and disseminated as resources.
Keywords: circuit bending, ecology, sound art, sustainability
Procedia PDF Downloads 171
1720 Assessment of Genetic Variability of Potato Genotypes for Proline Under Salt Stress Conditions
Authors: Elchin Hajiyev, Afet Memmedova Dadash, Sabina Hajiyeva, Aynur Karimova, Ramiz Aliyev
Abstract:
Although potatoes have a wide distribution range, the yield potential of varieties varies greatly depending on the region. Our country is made up of agricultural regions with very different environmental characteristics. In this case, we cannot expect introduced varieties to show the same adaptation to the different conditions of our country. For this reason, varieties with high general adaptability should be used in our country, rather than varieties with special adaptability to certain areas. Soil salinization has become a global problem. Increased salinity has a serious impact on food security by reducing plant productivity. Plants have protective mechanisms of adaptation to salt stress, such as the synthesis of physiologically active substances, resistance to antioxidant stress and oxidation of membrane lipids. One of these substances is free proline. Our study revealed genetic variation in proline accumulation among samples exposed to stress factors. Changes in proline content under stress conditions were studied in 50 samples, and there was wide variation across all treatments. The amount of proline varied between 7.2 and 37.7 μM/g under salinity conditions. The lowest level was in the SF33 genotype (1.5 times more than the control, 2.5 μM/g). The highest level of proline under salt stress was in the SF45 genotype (7.25 times higher than the control, 32.5 μM/g). Our studies found that the protective system reacts differently to the influence of stress factors. According to the results obtained on the amount of proline, adaptation mechanisms must be activated more strongly to maintain metabolism and ensure viability in sensitive forms under the influence of stress factors.
At high doses of the salt stressor, a tenfold increase in proline compared to the control indicates significant damage to the plant organism as a result of stress. To prevent such damage, the antioxidant system needs to mobilize quickly and work at full capacity under adverse conditions. An increase in the dose of the salt stress factor in our study caused a greater increase in the amount of free proline in plant tissues. Considering the functions of proline as an osmoprotectant and antioxidant, it was found that the increase in its amount is aimed at protecting the plant from the acute effects of stressors.
Keywords: genetic variability, potato, genotypes, proline, stress
Procedia PDF Downloads 49
1719 A Diagnostic Accuracy Study: Comparison of Two Different Molecular-Based Tests (Genotype HelicoDR and Seeplex Clar-H. pylori ACE Detection) in the Diagnosis of Helicobacter pylori Infections
Authors: Recep Kesli, Huseyin Bilgin, Yasar Unlu, Gokhan Gungor
Abstract:
Aim: The aim of this study was to compare the diagnostic values of two different molecular-based tests (GenoType® HelicoDR and Seeplex® H. pylori-ClaR ACE Detection) in detecting the presence of H. pylori in gastric biopsy specimens. In addition, the study aimed to determine the resistance rates of H. pylori strains, isolated from gastric biopsy material cultures, against clarithromycin and quinolones, using both genotypic (GenoType® HelicoDR, Seeplex® H. pylori-ClaR ACE Detection) and phenotypic (gradient strip, E-test) methods. Material and methods: A total of 266 patients who were admitted to the Konya Education and Research Hospital Department of Gastroenterology with dyspeptic complaints between January 2011 and June 2013 were included in the study. Microbiological and histopathological examinations of biopsy specimens taken from the antrum and corpus regions were performed. The presence of H. pylori in all biopsy samples was investigated by five different diagnostic methods: culture (C) (Portagerm pylori-PORT PYL, Pylori agar-PYL, GENbox microaer, bioMerieux, France), histology (H) (Giemsa, Hematoxylin and Eosin staining), rapid urease test (RUT) (CLOtest, Kimberly-Clark, USA), and two different molecular tests: GenoType® HelicoDR (Hain, Germany), based on a DNA strip assay, and Seeplex® H. pylori-ClaR ACE Detection (Seegene, South Korea), based on multiplex PCR. Antimicrobial resistance of the H. pylori isolates against clarithromycin and levofloxacin was determined by the GenoType® HelicoDR, Seeplex® H. pylori-ClaR ACE Detection, and gradient strip (E-test, bioMerieux, France) methods. Culture positivity alone, or positivity of both histology and RUT together, was accepted as the gold standard for H. pylori positivity. The sensitivity and specificity of the two molecular methods were calculated against both of these gold standards.
Results: A total of 266 patients aged 16-83 years, of whom 144 (54.1%) were female and 122 (45.9%) male, were included in the study. 144 patients were culture positive, and 157 were positive by both H and RUT. 179 patients were found positive by both GenoType® HelicoDR and Seeplex® H. pylori-ClaR ACE Detection. The sensitivity and specificity of the five methods studied were as follows: C, 80.9% and 84.4%; H + RUT, 88.2% and 75.4%; GenoType® HelicoDR, 100% and 71.3%; and Seeplex® H. pylori-ClaR ACE Detection, 100% and 71.3%. A strong correlation was found between C and H + RUT, C and GenoType® HelicoDR, and C and Seeplex® H. pylori-ClaR ACE Detection (r: 0.644, p: 0.000; r: 0.757, p: 0.000; r: 0.757, p: 0.000, respectively). Of all 144 isolated H. pylori strains, 24 (16.6%) were resistant to clarithromycin and 18 (12.5%) to levofloxacin. Genotypic clarithromycin resistance was detected in only 15 cases with GenoType® HelicoDR and 6 cases with Seeplex® H. pylori-ClaR ACE Detection. Conclusion: In our study, it was concluded that GenoType® HelicoDR and Seeplex® H. pylori-ClaR ACE Detection were the most sensitive of all the investigated diagnostic methods (C, H, and RUT).
Keywords: Helicobacter pylori, GenoType® HelicoDR, Seeplex® H. pylori-ClaR ACE Detection, antimicrobial resistance
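The sensitivity and specificity figures for the molecular tests can be reconstructed from the counts reported above under the culture gold standard (144 culture-positive, 266 - 144 = 122 culture-negative, 179 molecular-positive). A minimal sketch; the assumption that all 144 culture-positives were among the 179 molecular-positives follows from the reported 100% sensitivity:

```python
def sensitivity(tp, fn):
    # true positives / all gold-standard positives
    return tp / (tp + fn)

def specificity(tn, fp):
    # true negatives / all gold-standard negatives
    return tn / (tn + fp)

# Culture gold standard: 144 positive, 266 - 144 = 122 negative.
# Both molecular tests flagged 179 patients, assumed to include
# all 144 culture-positives (consistent with 100% sensitivity).
tp = 144
fn = 0
fp = 179 - 144          # 35 molecular-positive but culture-negative
tn = 122 - fp           # 87

print(round(100 * sensitivity(tp, fn), 1))  # 100.0
print(round(100 * specificity(tn, fp), 1))  # 71.3
```

The derived counts (fp = 35, tn = 87) reproduce the reported 100%/71.3% figures exactly, which supports the internal consistency of the abstract's numbers.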
Procedia PDF Downloads 168
1718 Effect of a Synthetic Platinum-Based Complex on Autophagy Induction in Leydig TM3 Cells
Authors: Ezzati Givi M., Hoveizi E., Nezhad Marani N.
Abstract:
Platinum-based anticancer therapeutics are the most widely used drugs in clinical chemotherapy but have major limitations and various side effects in clinical applications. Gonadotoxicity and sterility are among the most common complications for cancer survivors, and they appear to be drug-specific and dose-related. Therefore, many efforts have been dedicated to discovering new structures of platinum-based anticancer agents with an improved therapeutic index and fewer side effects. In this regard, new Pt(II)-phosphane complexes containing heterocyclic thionate ligands (PCTL) have been synthesized, which show more potent antitumor activities in comparison to cisplatin. Cisplatin, the leading metal-based antitumor drug in the field, induces testicular toxicity in Leydig and Sertoli cells, leading to serious side effects such as azoospermia and infertility. Therefore, in the present study, we aimed to investigate the cytotoxic effect of PCTL on mouse TM3 Leydig cells, with particular emphasis on the role of autophagy, in comparison to cisplatin. An MTT assay was performed to evaluate the IC50 of PCTL and to analyze TM3 Leydig cell viability. Cell morphology was evaluated via an inverted microscope, and morphological changes such as nuclear swelling or autophagic vacuole formation were assessed by DAPI and MDC staining. Testosterone production in the culture medium was measured using an ELISA kit. Finally, the expression of the autophagy-related genes Atg5, Beclin1 and p62 was analyzed by qPCR. Based on the MTT results, the IC50 value of PCTL was 50 μM in TM3 cells, and the cytotoxic effects were dose- and time-dependent. The morphological changes investigated by inverted microscopy, DAPI, and MDC staining showed that the cytotoxic concentration of PCTL was significantly higher than that of cisplatin in the treated TM3 Leydig cells.
The qPCR results showed a lack of expression of the p62, Atg5 and Beclin1 genes in TM3 cells treated with PCTL in comparison to the cisplatin and control groups. It should be noted that the effect of a 25 μM PCTL concentration on TM3 cells was associated with increased testosterone production and secretion, which requires further study to explain the possible causes and the molecular mechanisms involved. The results of the study showed that PCTL had less lethal effects on TM3 cells in comparison to cisplatin and probably did not induce autophagy in TM3 cells.
Keywords: platinum-based anticancer agents, cisplatin, Leydig TM3 cells, autophagy
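The IC50 determination from MTT data as described above can be illustrated with a simple interpolation. The dose-response values below are hypothetical (the abstract reports only the resulting IC50 of 50 μM); the sketch finds the concentration at which viability crosses 50%:

```python
def ic50(concs, viability):
    """Linearly interpolate the concentration giving 50% viability.
    concs sorted ascending; viability in %, decreasing with dose."""
    for (c0, v0), (c1, v1) in zip(zip(concs, viability),
                                  zip(concs[1:], viability[1:])):
        if v0 >= 50.0 >= v1:
            # linear interpolation between the bracketing points
            return c0 + (v0 - 50.0) * (c1 - c0) / (v0 - v1)
    raise ValueError("50% viability not bracketed by the data")

# Hypothetical dose-response, chosen to be consistent with the
# reported IC50 = 50 uM for PCTL in TM3 cells (illustrative only)
concs = [0, 12.5, 25, 50, 100]        # uM PCTL
viability = [100, 85, 68, 50, 30]     # % viable cells

print(ic50(concs, viability))  # 50.0
```

In practice IC50 values are usually obtained by fitting a four-parameter logistic curve rather than linear interpolation; the sketch only shows the bracketing idea.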
Procedia PDF Downloads 128
1717 Design and Assessment of Base Isolated Structures under Spectrum-Compatible Bidirectional Earthquakes
Authors: Marco Furinghetti, Alberto Pavese, Michele Rinaldi
Abstract:
Concave Surface Slider devices have been increasingly used in real applications for the seismic protection of both bridge and building structures. Several research activities have been carried out to investigate the lateral response of this typology of device, and a reasonably high level of knowledge has been reached. If a radial analysis is performed, the frictional force is always aligned with the restoring force, whereas under bidirectional seismic events a bi-axial interaction of the directions of motion occurs, due to the step-wise projection of the main frictional force, which is assumed to be aligned with the trajectory of the isolator. Nonetheless, if non-linear time history analyses have to be performed, standard codes provide precise rules for the definition of an averagely spectrum-compatible set of accelerograms in radial conditions, whereas for bidirectional motions different combinations of the single-component spectra can be found. Moreover, software for the adjustment of natural accelerograms is now available, which leads to a higher quality of spectrum-compatibility and to a smaller dispersion of results for radial motions. In this work, a simplified design procedure is defined for building structures base-isolated by means of Concave Surface Slider devices. Different case study structures have been analyzed. In a first stage, the capacity curve was computed by means of non-linear static analyses on the fixed-base structures: inelastic fiber elements were adopted, and different direction angles of the lateral forces were studied. Based on these results, a linear elastic Finite Element Model was defined, characterized by the same global stiffness as the linear elastic branch of the non-linear capacity curve. Then, non-linear time history analyses were performed on the base-isolated structures by applying seven bidirectional seismic events.
The spectrum-compatibility of the bidirectional earthquakes was studied by considering different combinations of the single components and by adjusting single records: with the proposed procedure, results have shown a small dispersion and good agreement with the assumed design values.
Keywords: concave surface slider, spectrum-compatibility, bidirectional earthquake, base isolation
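The averaged spectrum-compatibility check described above can be sketched programmatically. All numbers below are hypothetical, and the 90% lower bound on the mean spectrum is an assumed Eurocode 8-style rule for suites of accelerograms, since the abstract does not name the code used:

```python
def mean_spectrum_ok(record_spectra, target, lower_bound=0.9):
    """Check that the mean spectral ordinate of a suite of records
    is >= lower_bound * target at every period (EC8-style rule)."""
    n = len(record_spectra)
    means = [sum(rec[i] for rec in record_spectra) / n
             for i in range(len(target))]
    return all(m >= lower_bound * t for m, t in zip(means, target))

# Hypothetical suite: 3 records sampled at 4 spectral periods,
# spectral accelerations in g (illustrative values only)
target = [0.50, 0.80, 0.60, 0.30]
records = [
    [0.48, 0.85, 0.55, 0.31],
    [0.52, 0.74, 0.62, 0.27],
    [0.47, 0.90, 0.58, 0.29],
]

print(mean_spectrum_ok(records, target))  # True
```

For bidirectional motions the check would be repeated on a chosen combination of the two component spectra (e.g. their SRSS), which is precisely the choice the abstract investigates.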
Procedia PDF Downloads 292
1716 Effect of Cutting Tools and Working Conditions on the Machinability of Ti-6Al-4V Using Vegetable Oil-Based Cutting Fluids
Authors: S. Gariani, I. Shyha
Abstract:
Cutting titanium alloys is usually accompanied by low productivity, poor surface quality, short tool life and high machining costs. This is due to the excessive generation of heat at the cutting zone and difficulties in heat dissipation caused by the relatively low thermal conductivity of this metal. Cooling applications in machining processes are crucial, as many operations cannot be performed efficiently without cooling. Improving machinability, increasing productivity, and enhancing surface integrity and part accuracy are the main advantages of cutting fluids. Conventional fluids such as mineral oil-based, synthetic and semi-synthetic fluids are the most common cutting fluids in the machining industry. Although these cutting fluids are beneficial to industry, they pose a great threat to human health and the ecosystem. Vegetable oils (VOs) are being investigated as a potential source of environmentally favourable lubricants, due to a combination of biodegradability, good lubricating properties, low toxicity, high flash points, low volatility, high viscosity indices and thermal stability. The fatty acids of vegetable oils are known to provide thick, strong, and durable lubricant films. These strong lubricating films give the vegetable oil base stock a greater capability to absorb pressure and a high load-carrying capacity. This paper details preliminary experimental results when turning Ti-6Al-4V. The impact of various VO-based cutting fluids, cutting tool materials and working conditions was investigated. A full factorial experimental design was employed, involving 24 tests, to evaluate the influence of the process variables on average surface roughness (Ra), tool wear and chip formation. In general, Ra varied between 0.5 and 1.56 µm; the Vasco1000 cutting fluid presented comparable performance with the other fluids in terms of surface roughness, while the uncoated coarse-grain WC carbide tool achieved lower flank wear at all cutting speeds.
On the other hand, all tool tips were subjected to uniform flank wear during the whole cutting trials. Additionally, the formed chip thickness ranged between 0.1 and 0.14 mm, with a noticeable decrease in chip size when a higher cutting speed was used.
Keywords: cutting fluids, turning, Ti-6Al-4V, vegetable oils, working conditions
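The full factorial design of 24 tests mentioned above can be enumerated programmatically. The factor levels below are illustrative assumptions (the abstract names Vasco1000 and the uncoated coarse-grain WC tool but does not list all levels); three fluids crossed with three two-level factors give the stated 24 runs:

```python
from itertools import product

# Hypothetical factor levels for a 3 x 2 x 2 x 2 full factorial design.
# Vasco1000 and the uncoated coarse-grain WC tool appear in the abstract;
# the remaining names and numeric levels are assumptions.
fluids = ["Vasco1000", "VO-blend-A", "VO-blend-B"]   # cutting fluids
tools = ["uncoated coarse-grain WC", "coated WC"]    # tool materials
speeds = [60, 90]                                     # cutting speed, m/min
feeds = [0.1, 0.2]                                    # feed rate, mm/rev

# Every combination of levels is one experimental run
runs = list(product(fluids, tools, speeds, feeds))
print(len(runs))  # 24
```

Enumerating the runs this way makes it easy to randomize the test order or to tabulate Ra, flank wear and chip thickness against each factor combination.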
Procedia PDF Downloads 279
1715 From Preoccupied Attachment Pattern to Depression: Serial Mediation Model on the Female Sample
Authors: Tatjana Stefanovic Stanojevic, Milica Tosic Radev, Aleksandra Bogdanovic
Abstract:
Depression is considered to be a leading cause of death and disability in the female population, which is why understanding the dynamics of the onset of depressive symptomatology is important. A review of the literature indicates a relationship between depressive symptoms and insecure attachment patterns, but very few studies have examined the mechanism underlying this relation. The aim of the study was to examine the pathway from the preoccupied attachment pattern to depressive symptomatology, and to test the mediation effect of mentalization, social anxiety and rumination in this relationship using a serial mediation model. The research was carried out on a geographical cluster sample from the general population of Serbia within the project ‘Indicators and models of family and work roles harmonization’ funded by the Ministry of Education, Science and Technological Development of the Republic of Serbia. This research was carried out on a subsample of 791 working-age female adults from 37 urban and rural locations distributed across 20 administrative districts of Serbia. The respondents filled in a battery of instruments, including the Relationship Questionnaire - Clinical Version (RQ-CV), the Mentalization Scale (MentS), the Scale of Social Anxiety (SA), the Ruminative Thought Style Questionnaire (RTSQ), and the Patient Health Questionnaire (PHQ-9). The results confirm our assumption that the total indirect effect of the preoccupied attachment pattern on depressive symptoms is significant across all mediators separately. More importantly, this effect is still present in a model with a sequential mediator relationship, in which social anxiety, rumination, and mentalization were treated as serial mediators of the relationship between preoccupied attachment and depressive symptoms (estimated indirect effect=0.004, bootstrapped 95% CI=0.002 to 0.007).
Our findings suggest that there is a significant specific indirect effect of the preoccupied attachment pattern on depressive symptoms, occurring through mentalization, social anxiety and rumination: preoccupied attachment causes a decrease in self-related mentalization, which in turn increases social anxiety and rumination, with depressive symptoms as the final consequence. The finding that the path from the preoccupied attachment pattern to depressive symptoms is typical in women is understandable from the perspective of both evolutionary and culturally conditioned gender differences. The practical implications of the study are reflected in the recommendations for prevention and early psychotherapeutic response among preoccupied women with depressive symptomatology. Treatment of this specific group of depressed patients should focus on strengthening mentalization, learning to accept and understand oneself better, reducing anxiety in situations where mistakes are visible to others, and replacing the rumination strategy with more constructive coping strategies.
Keywords: preoccupied attachment, depression, serial mediation model, mentalization, rumination
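A serial (product-of-paths) indirect effect with a percentile bootstrap confidence interval can be sketched as follows. The data here are simulated with illustrative coefficients, not the study's estimates, and the sequential-OLS approach is a generic stand-in for a PROCESS-style serial mediation model:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 791  # subsample size matching the study

# Simulated data with a known serial pathway X -> M1 -> M2 -> M3 -> Y
# (all path coefficients below are illustrative, not the study's estimates)
X = rng.normal(size=n)                        # preoccupied attachment
M1 = -0.3 * X + rng.normal(size=n)            # self-related mentalization
M2 = -0.3 * M1 + rng.normal(size=n)           # social anxiety
M3 = 0.3 * M2 + rng.normal(size=n)            # rumination
Y = 0.3 * M3 + rng.normal(size=n)             # depressive symptoms

def slope(y, *preds):
    """OLS coefficient of the last predictor, controlling for the others."""
    Xmat = np.column_stack([np.ones(len(y)), *preds])
    beta = np.linalg.lstsq(Xmat, y, rcond=None)[0]
    return beta[-1]

def serial_indirect(idx):
    a = slope(M1[idx], X[idx])                              # X  -> M1
    b = slope(M2[idx], X[idx], M1[idx])                     # M1 -> M2 | X
    c = slope(M3[idx], X[idx], M1[idx], M2[idx])            # M2 -> M3 | X, M1
    d = slope(Y[idx], X[idx], M1[idx], M2[idx], M3[idx])    # M3 -> Y  | X, M1, M2
    return a * b * c * d                                    # product-of-paths estimate

# Percentile bootstrap CI for the serial indirect effect
boot = np.array([serial_indirect(rng.integers(0, n, n)) for _ in range(2000)])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect CI: [{lo:.4f}, {hi:.4f}]")
```

A CI that excludes zero, as the abstract reports, indicates a significant serial indirect effect.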
Procedia PDF Downloads 144
1714 Evaluation of Air Movement, Humidity and Temperature Perceptions with the Occupant Satisfaction in Office Buildings in Hot and Humid Climate Regions by Means of Field Surveys
Authors: Diego S. Caetano, Doreen E. Kalz, Louise L. B. Lomardo, Luiz P. Rosa
Abstract:
The energy consumption of non-residential buildings in Brazil has a great impact on the national infrastructure. The growth of energy consumption has given a special role to building cooling systems, driven by people's increased requirements for hygrothermal comfort. This paper presents how the occupants of office buildings perceive and evaluate hygrothermal comfort regarding temperature, humidity, and air movement, considering the cooling systems present in the buildings studied, as assessed by real occupants in areas of hot and humid climate. The paper presents results collected over an extended period from 3 office buildings in the cities of Rio de Janeiro and Niteroi (Brazil) in 2015 and 2016, using daily questionnaires with eight questions answered by 114 people over 3 to 5 weeks per building, twice a day (10 a.m. and 3 p.m.). The paper analyses 6 of the 8 questions, emphasizing the perception of temperature, humidity, and air movement. Statistical analyses were made by crossing participants' answers with high-time-resolution humidity and temperature data. Regressions comparing internal and external temperature were computed and then set against the participants' answers. The results are presented as graphics combining temperature and air humidity statistics with the answers of the real occupants. Participants' perceptions of humidity and air movement were also analyzed. The hygrothermal comfort statistical model of the European standard DIN EN 15251 and that of the Brazilian standard NBR 16401 were compared against the hygrothermal comfort perceptions of the participants, with emphasis on air humidity, building on prior studies published within this same research. The studies point to a relative tolerance for temperatures higher than those determined by the standards, as well as variation in the participants' perception of air humidity.
The paper presents a body of detailed information that permits improving the quality of buildings based on the perceptions of office building occupants, contributing to energy reduction without harm to health or to the necessary hygrothermal comfort, and reducing electricity consumption for cooling.
Keywords: thermal comfort, energy consumption, energy standards, comfort models
Procedia PDF Downloads 323
1713 Comparison of Two Strategies in Thoracoscopic Ablation of Atrial Fibrillation
Authors: Alexander Zotov, Ilkin Osmanov, Emil Sakharov, Oleg Shelest, Aleksander Troitskiy, Robert Khabazov
Abstract:
Objective: Thoracoscopic surgical ablation of atrial fibrillation (AF) can be performed with two technologies: the first strategy uses the AtriCure device (bipolar, non-irrigated, non-clamping), and the second uses the Medtronic device (bipolar, irrigated, clamping). The study presents a comparative analysis of clinical outcomes of the two strategies in thoracoscopic ablation of AF using the AtriCure vs. Medtronic devices. Methods: In a two-center study, 123 patients underwent thoracoscopic ablation of AF over the period from 2016 to 2020. Patients were divided into two groups. The first group comprises patients treated with the AtriCure device (N=63), and the second group those treated with the Medtronic device (N=60). Patients were comparable in age, gender, and initial severity of their condition. Group 1 was 65% male with a median age of 57 years, while group 2 was 75% male with a median age of 60 years. Group 1 included patients with paroxysmal AF (14.3%), persistent AF (68.3%) and long-standing persistent AF (17.5%); in group 2 the proportions were 13.3%, 13.3% and 73.3%, respectively. Median ejection fraction and indexed left atrial volume were 63% and 40.6 ml/m2 in group 1, and 56% and 40.5 ml/m2 in group 2. In addition, group 1 consisted of 39.7% patients with chronic heart failure (NYHA Class II) and 4.8% with chronic heart failure (NYHA Class III), versus 45% and 6.7% in group 2. Follow-up consisted of laboratory tests, chest X-ray, ECG, 24-hour Holter monitoring, and cardiopulmonary exercise testing. Duration of freedom from AF, distant mortality rate, and prevalence of cerebrovascular events were compared between the two groups. Results: Exit block was achieved in all patients. According to the Clavien-Dindo classification of surgical complications, the fraction of adverse events was 14.3% in group 1 and 16.7% in group 2.
The mean follow-up period was 50.4 (31.8; 64.8) months in group 1 and 30.5 (14.1; 37.5) months in group 2 (P=0.0001). In group 1, total freedom from AF was achieved in 73.3% of patients, of whom 25% required additional antiarrhythmic drug (AAD) therapy or catheter ablation (CA); in group 2, the figures were 90% and 18.3%, respectively (P<0.02 for total freedom from AF). At follow-up, the distant mortality rate was 4.8% in group 1, with no fatal events in group 2. The prevalence of cerebrovascular events was higher in group 1 than in group 2 (6.7% vs. 1.7%, respectively). Conclusions: Despite the relatively shorter follow-up of group 2, the strategy using the Medtronic device showed quite encouraging results. Further research is needed to evaluate the effectiveness of this strategy over the long term.
Keywords: atrial fibrillation, clamping, ablation, thoracoscopic surgery
Procedia PDF Downloads 110
1712 Segmented Pupil Phasing with Deep Learning
Authors: Dumont Maxime, Correia Carlos, Sauvage Jean-François, Schwartz Noah, Gray Morgan
Abstract:
Context: The concept of the segmented telescope is unavoidable for building extremely large telescopes (ELTs) in the quest for spatial resolution, but it also allows one to fit a large telescope within a reduced volume of space (JWST) or into an even smaller volume (standard CubeSat). CubeSats have tight constraints on the available computational budget and on the small payload volume allowed. At the same time, they undergo thermal gradients leading to large and evolving optical aberrations. Pupil segmentation nevertheless comes with an obvious difficulty: co-phasing the different segments. The CubeSat constraints prevent the use of a dedicated wavefront sensor (WFS), making the focal-plane images acquired by the science detector the most practical alternative. Yet, one of the challenges of wavefront sensing is the non-linearity between the image intensity and the phase aberrations. Moreover, for Earth observation, the object is unknown and unrepeatable. Recently, several studies have suggested neural networks (NNs) for wavefront sensing, especially convolutional NNs, which are well known as non-linear, image-friendly problem solvers. Aims: We study in this paper the prospect of using an NN to measure the phasing aberrations of a segmented pupil directly from the focal-plane image, without a dedicated wavefront sensor. Methods: For our application, we take the case of a deployable telescope fitting in a CubeSat for Earth observation, which triples the aperture size (compared to the 10 cm CubeSat standard) and therefore triples the angular resolution capacity. To reach the diffraction-limited regime at visible wavelengths, a wavefront error below lambda/50 is typically required. The telescope's focal-plane detector, used for imaging, will also serve as the wavefront sensor. In this work, we study a point source, i.e.
the point spread function (PSF) of the optical system, as input to a VGG-net neural network, an architecture designed for image regression/classification. Results: This approach shows promising results (about 2 nm RMS of residual WFE, below lambda/50, for 40-100 nm RMS of input WFE) with a relatively fast computation time of less than 30 ms, which translates to a small computational burden. These results open the way to further study of larger aberrations and noise.
Keywords: wavefront sensing, deep learning, deployable telescope, space telescope
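The core difficulty, the non-linear mapping from segment phase to focal-plane intensity, can be illustrated with a toy two-segment pupil propagated by a Fourier transform. All sizes and the piston value below are illustrative, not the paper's configuration:

```python
import numpy as np

# Toy segmented pupil: a circular aperture split into two halves, with a piston
# phase step on one half. The focal-plane intensity (PSF) is |FFT(pupil)|^2 --
# non-linear in the phase, which is why a neural network (rather than a linear
# retrieval) is attractive for focal-plane wavefront sensing.
N = 256
y, x = np.mgrid[-N // 2:N // 2, -N // 2:N // 2]
aperture = (x**2 + y**2) < (N // 4)**2          # circular aperture mask

def psf(piston_rad):
    """Focal-plane PSF for a piston step (radians) on the right half-segment."""
    phase = np.where(x >= 0, piston_rad, 0.0)   # piston only on one segment
    pupil = aperture * np.exp(1j * phase)
    field = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(pupil)))
    return np.abs(field)**2

flat = psf(0.0)
stepped = psf(1.0)                               # ~lambda/6 piston error
# Strehl-like ratio: the peak intensity drops when the segments are mis-phased
ratio = stepped.max() / flat.max()
print(ratio)
```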
Procedia PDF Downloads 105
1711 Detection and Identification of Antibiotic Resistant Bacteria Using Infra-Red-Microscopy and Advanced Multivariate Analysis
Authors: Uraib Sharaha, Ahmad Salman, Eladio Rodriguez-Diaz, Elad Shufan, Klaris Riesenberg, Irving J. Bigio, Mahmoud Huleihel
Abstract:
Antimicrobial drugs have an important role in controlling illness associated with infectious diseases in animals and humans. However, the increasing resistance of bacteria to a broad spectrum of commonly used antibiotics has become a global health-care problem. Rapid determination of the antimicrobial susceptibility of a clinical isolate is often crucial for the optimal antimicrobial therapy of infected patients and in many cases can save lives. The conventional methods for susceptibility testing, like disk diffusion, are time-consuming, and other methods, including E-test and genotyping, are relatively expensive. Fourier transform infrared (FTIR) microscopy is a rapid, safe, and low-cost method that has been widely and successfully used in different studies for the identification of various biological samples, including bacteria. Modern infrared (IR) spectrometers with high spectral resolution enable measuring unprecedented biochemical information from cells at the molecular level. Moreover, the development of new bioinformatic analyses combined with IR spectroscopy yields a powerful technique, which enables the detection of structural changes associated with resistance. The main goal of this study is to evaluate the potential of FTIR microscopy in tandem with machine learning algorithms for rapid and reliable identification of bacterial susceptibility to antibiotics in a time span of a few minutes. The bacterial samples, which were identified at the species level by MALDI-TOF and examined for their susceptibility by the routine assay (micro-diffusion discs), were obtained from the bacteriology laboratories of Soroka University Medical Center (SUMC). These samples were examined by FTIR microscopy and analyzed by advanced statistical methods.
Our results, based on 550 E. coli samples, were promising and showed that by using the infrared spectroscopic technique together with multivariate analysis, it is possible to classify the tested bacteria into sensitive and resistant with a success rate higher than 85% for eight different antibiotics. Based on these preliminary results, it is worthwhile to continue developing FTIR microscopy as a rapid and reliable method for identifying antibiotic susceptibility.
Keywords: antibiotics, E. coli, FTIR, multivariate analysis, susceptibility
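The multivariate workflow, dimensionality reduction followed by a classifier, can be sketched on synthetic spectra. The band position, class shift, and noise level below are invented solely to illustrate the pipeline, and a nearest-centroid rule stands in for whatever classifier the authors actually used:

```python
import numpy as np

rng = np.random.default_rng(1)
wavenumbers = np.linspace(800, 1800, 200)      # cm^-1, mid-IR "fingerprint" region

def spectra(shift, n):
    """Synthetic absorbance spectra: one Gaussian band whose position encodes class."""
    centre = 1240 + shift                       # hypothetical band position (assumed)
    band = np.exp(-((wavenumbers - centre) / 30.0)**2)
    return band + 0.05 * rng.normal(size=(n, wavenumbers.size))

X = np.vstack([spectra(0.0, 100), spectra(8.0, 100)])   # "sensitive" vs "resistant"
y = np.array([0] * 100 + [1] * 100)

# PCA by SVD -- the usual first step of a multivariate spectral analysis
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:5].T                          # keep 5 principal components

# Nearest-centroid classifier in PCA space (stand-in for LDA/SVM etc.)
c0 = scores[y == 0].mean(axis=0)
c1 = scores[y == 1].mean(axis=0)
pred = (np.linalg.norm(scores - c1, axis=1) <
        np.linalg.norm(scores - c0, axis=1)).astype(int)
accuracy = (pred == y).mean()
print(accuracy)
```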
Procedia PDF Downloads 265
1710 Kinetic Modelling of Drying Process of Jumbo Squid (Dosidicus Gigas) Slices Subjected to an Osmotic Pretreatment under High Pressure
Authors: Mario Perez-Won, Roberto Lemus-Mondaca, Constanza Olivares-Rivera, Fernanda Marin-Monardez
Abstract:
This research presents the simultaneous application of high hydrostatic pressure (HHP) and osmotic dehydration (OD) as a pretreatment to the hot-air drying of jumbo squid (Dosidicus gigas) cubes. The drying time was reduced to 2 hours at 60ºC and 5 hours at 40°C compared to untreated jumbo squid samples. This was due to the osmotic pressure under high-pressure treatment, where increased salt saturation caused greater water loss. Thus, convective drying time was reduced, and effective water diffusion played an important role in this research. Different working conditions, such as pressure (350-550 MPa), pressure time (5-10 min), salt (NaCl) concentration (10 and 15%) and drying temperature (40-60ºC), were optimized according to the kinetic parameters of each mathematical model. The models used for the experimental drying curves were the Weibull, Page and Logarithmic models; the last gave the best fit to the experimental data. The values of effective water diffusivity varied from 4.82 to 6.59×10⁻⁹ m²/s for the 16 (OD+HHP) curves, whereas the control samples gave values of 1.76 and 5.16×10⁻⁹ m²/s for 40 and 60°C, respectively. On the other hand, quality characteristics such as color, texture, non-enzymatic browning, water holding capacity (WHC) and rehydration capacity (RC) were assessed. The L* (lightness) color parameter increased, while the b* (yellowish) and a* (reddish) parameters decreased for the OD+HHP treated samples, indicating that the treatment prevents sample browning. The texture parameters hardness and elasticity decreased, but chewiness increased with treatment, which resulted in a product with higher tenderness and less firmness compared to the untreated sample. Finally, the WHC and RC values of most treatments increased owing to less cellular tissue damage compared to untreated samples.
Therefore, knowledge of the drying kinetics and quality characteristics of dried jumbo squid subjected to a pretreatment of osmotic dehydration under high hydrostatic pressure is extremely important at an industrial level, so that the drying process can succeed under different pretreatment conditions and process variables.
Keywords: diffusion coefficient, drying process, high pressure, jumbo squid, modelling, quality aspects
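The kinetic modelling itself reduces to fitting thin-layer models such as the Page model and extracting an effective diffusivity from Fick's slab solution. A sketch on synthetic data: the drying curve, the slab half-thickness, and the rate constants are all assumed, chosen only to land in the diffusivity range the abstract reports:

```python
import numpy as np

# Synthetic moisture-ratio curve MR(t), shaped like a 60 degC convective-drying run.
t = np.arange(0.0, 5.0, 0.5)            # drying time, h
MR = np.exp(-0.9 * t**1.1)              # "observed" data (assumed Page-like)

# Page model MR = exp(-k t^n), linearised as ln(-ln MR) = ln k + n ln t
mask = t > 0
n_est, ln_k = np.polyfit(np.log(t[mask]), np.log(-np.log(MR[mask])), 1)
k_est = np.exp(ln_k)
print(k_est, n_est)                     # recovers k ~ 0.9, n ~ 1.1

# Effective diffusivity from the first term of Fick's solution for a slab:
# ln MR = ln(8/pi^2) - (pi^2 * Deff / L^2) * t, so the slope of ln MR vs t gives Deff.
L = 0.01                                # slab half-thickness, m (assumed)
slope, _ = np.polyfit(t * 3600.0, np.log(MR), 1)   # time converted to seconds
Deff = -slope * L**2 / np.pi**2
print(Deff)                             # m^2/s, on the order of 1e-9
```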
Procedia PDF Downloads 245
1709 Weapon-Being: Weaponized Design and Object-Oriented Ontology in Hypermodern Times
Authors: John Dimopoulos
Abstract:
This proposal attempts a refabrication of Heidegger’s classic thing-being and object-being analysis in order to provide better ontological tools for understanding contemporary culture, technology, and society. In his work, Heidegger sought to understand and comment on the problem of technology in an era of rampant innovation and increased perils for society and the planet. Today we seem to be at another such crossroads, coming after postmodernity, in which the dreams and dangers of modernity, augmented by the critical speculations of the post-war era, take shape. The new era we are now living in, referred to as hypermodernity by researchers in fields such as architecture and cultural theory, is defined by the horizontal implementation of digital technologies, cybernetic networks, and mixed reality. Technology today is rapidly approaching a turning point, namely the point of no return for humanity’s supervision over its creations. The techno-scientific civilization of the 21st century creates a series of problems that are progressively more difficult and complex to solve and impossible to ignore: climate change, data safety, cyber depression, and digital stress are among the most prevalent. Humans often have no option but to address technology-induced problems with even more technology, as in the case of neural networks, machine learning, and AI, thus widening the gap between creating technological artifacts and understanding their broad impact and possible future development. As all technical disciplines, and particularly design, become enmeshed in a matrix of digital hyper-objects, a conceptual toolbox that allows us to handle the new reality becomes more and more necessary. Weaponized design, prevalent in many fields, such as social and traditional media, urban planning, industrial design, advertising, and the internet in general, hints toward an increase in conflicts.
These conflicts between tech companies, stakeholders, and users, with implications for politics, work, education, and production (as apparent in the Amazon workers’ strikes, Donald Trump’s 2016 campaign, the Facebook and Microsoft data scandals, and more), are often non-transparent to the wider public’s eye, thus consolidating new elites and technocratic classes and making the public scene less and less democratic. The new category proposed, weapon-being, is outlined with respect to the basic function of reducing complexity, subtracting materials, actants, and parameters, not strictly in favor of a humanistic re-orientation but within a more inclusive ontology of objects and subjects. Utilizing insights from Object-Oriented Ontology (OOO) and its schematization of technological objects, an outline for a radical ontology of technology is approached.
Keywords: design, hypermodernity, object-oriented ontology, weapon-being
Procedia PDF Downloads 152
1708 Genomic Prediction Reliability Using Haplotypes Defined by Different Methods
Authors: Sohyoung Won, Heebal Kim, Dajeong Lim
Abstract:
Genomic prediction is an effective way to measure the breeding abilities of livestock based on genomic estimated breeding values, which are statistically predicted from genotype data using best linear unbiased prediction (BLUP). Using haplotypes, clusters of linked single nucleotide polymorphisms (SNPs), as markers instead of individual SNPs can improve the reliability of genomic prediction, since the probability of a quantitative trait locus being in strong linkage disequilibrium (LD) with the markers is higher. To use haplotypes efficiently in genomic prediction, optimal ways of defining them need to be found. In this study, 770K SNP chip data were collected from a Hanwoo (Korean cattle) population consisting of 2506 cattle. Haplotypes were first defined in three different ways: based on 1) haplotype length (bp), 2) the number of SNPs, and 3) k-medoids clustering by LD. To compare the methods in parallel, haplotypes defined by all methods were set to have comparable sizes; in each method, haplotypes defined to have an average of 5, 10, 20 or 50 SNPs were tested. A modified GBLUP method using haplotype alleles as predictor variables was implemented to test the prediction reliability of each haplotype set. The conventional genomic BLUP (GBLUP) method, which uses individual SNPs, was also tested to evaluate the performance of the haplotype sets in genomic prediction. Carcass weight was used as the test phenotype. As a result, haplotypes defined by all three methods showed increased reliability compared to conventional GBLUP. There were few differences in reliability between the haplotype-defining methods. The reliability of genomic prediction was highest when the average number of SNPs per haplotype was 20 in all three methods, implying that haplotypes including around 20 SNPs can be optimal markers for genomic prediction.
When the number of alleles generated by each haplotype-defining method was compared, clustering by LD generated the fewest alleles. Using haplotype alleles for genomic prediction showed better performance, suggesting improved accuracy in genomic selection. The number of predictor variables decreased when the LD-based method was used, while all three haplotype-defining methods showed similar performance. This suggests that defining haplotypes based on LD can reduce computational costs and allows efficient prediction. Finding optimal ways to define haplotypes and using the haplotype alleles as markers can provide improved performance and efficiency in genomic prediction.
Keywords: best linear unbiased predictor, genomic prediction, haplotype, linkage disequilibrium
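The GBLUP machinery itself is compact: a genomic relationship matrix built from centred marker counts, then a ridge-type solve for the breeding values. A self-contained sketch on simulated data, in which the marker counts, effect sizes, and variance ratio are all invented for illustration (haplotype alleles would simply replace the SNP columns in Z):

```python
import numpy as np

rng = np.random.default_rng(2)
n_animals, n_markers = 200, 500

# Simulated marker matrix (0/1/2 allele counts) and a polygenic phenotype.
Z = rng.integers(0, 3, size=(n_animals, n_markers)).astype(float)
Zc = Z - Z.mean(axis=0)                       # centre columns, as in VanRaden's G
true_effects = rng.normal(0, 0.05, n_markers)
y = Zc @ true_effects + rng.normal(0, 1.0, n_animals)

# Genomic relationship matrix and GBLUP of breeding values:
# g_hat = G (G + lambda I)^-1 (y - mean), with lambda = sigma_e^2 / sigma_g^2
G = Zc @ Zc.T / n_markers
lam = 1.0                                     # variance ratio (assumed known here)
g_hat = G @ np.linalg.solve(G + lam * np.eye(n_animals), y - y.mean())

# Reliability proxy: correlation between predicted and true genetic values
r = np.corrcoef(g_hat, Zc @ true_effects)[0, 1]
print(round(r, 2))
```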
Procedia PDF Downloads 141
1707 Athlete Coping: Personality Dimensions of Recovery from Injury
Authors: Randall E. Osborne, Seth A. Doty
Abstract:
As participation in organized sports increases, so does the risk of sustaining an athletic injury. These unfortunate injuries result in missed time from practice and, inevitably, from the field of competition. Recovery time plays a pivotal role in the overall rehabilitation of the athlete. With time and rehabilitation, an athlete’s physical injury can be properly treated. However, there seem to be few measures assessing psychological recovery from injury. Although an athlete may have been cleared to return to play, there may still be lingering doubt about the injury. Overall, there is a vast difference between being physically cleared to play and being psychologically ready to return to play. Certain personality traits might serve as predictors of an individual’s rate of psychological recovery from an injury. The purpose of this study is to explore the correlations between athletes’ personality and their recovery from athletic injury, specifically examining how locus of control has been used in other studies and how it can inform the current one. The study also examines the link between hardiness and coping strategies; the two are closely related and can play a major role in an athlete’s mental toughness, which is tested here. Competitive trait anxiety is examined to capture perceived anxiety during athletic competition, and the Big 5 personality dimensions and social support are also examined in conjunction with recovery from athletic injury. Athletic injury is a devastating and common occurrence that can happen in any sport. Injured athletes often require resources and treatment to be able to return to the field of play. Athletes become more involved with physical and mental treatment as the length of recovery time increases.
It is very reasonable to assume that personality traits would be predictive of athlete recovery from injury. The current study investigated the potential relationship between personality traits and recovery time; more specifically, the personality traits of locus of control, hardiness, social support, competitive trait anxiety, and the “Big 5”. Results indicated that athletes with a higher internal locus of control tend to report being physically ready to return to play, and “ready” to return to play, faster than those with an external locus of control. Additionally, Openness to Experience (among the Big 5 personality dimensions) was related to the speed of return to play.
Keywords: athlete, injury, personality, readiness to play, recovery
Procedia PDF Downloads 148
1706 Influence of Plant Cover and Redistributing Rainfall on Green Roof Retention and Plant Drought Stress
Authors: Lubaina Soni, Claire Farrell, Christopher Szota, Tim D. Fletcher
Abstract:
Green roofs are a promising engineered ecosystem for reducing stormwater runoff and restoring vegetation cover in cities. Plants can contribute to rainfall retention by rapidly depleting water in the substrate; however, this increases the risk of plant drought stress. Green roof configurations therefore need to allow plants to deplete the substrate efficiently while avoiding severe drought stress. This study used green roof modules placed in a rainout shelter under a simulated six-month rainfall regime for Melbourne, Australia. Rainfall was applied equally to each module with an overhead irrigation system. Apart from rainfall, modules were exposed to natural climatic conditions, including temperature, wind, and radiation. A single species, Ficinia nodosa, was planted in five different treatments with three replicates of each treatment. In this experiment, we tested the impact of three plant cover treatments (0%, 50% and 100%) on rainfall retention and plant drought stress. We also installed two runoff zone treatments covering 50% of the substrate surface on additional modules with 0% and 50% plant cover, to determine whether directing rainfall toward plant roots would reduce drought stress without compromising rainfall retention. Retention performance was measured for the simulated rainfall events, quantifying all components of hydrological performance and plant survival on green roofs. We found that evapotranspiration and rainfall retention were similar for modules with 50% and 100% plant cover. However, modules with 100% plant cover showed significantly higher plant drought stress. Therefore, planting at a lower cover/density reduced plant drought stress without jeopardizing rainfall retention performance. Installing runoff zones marginally reduced evapotranspiration and rainfall retention, but by approximately the same amount for modules with 0% and 50% plant cover.
This indicates that reduced evaporation due to the runoff zones likely contributed to the lower evapotranspiration and rainfall retention. Further, runoff occurred faster from modules with runoff zones than from those without, indicating that we created a faster pathway for water to enter and leave the substrate, which also likely contributed to lower overall evapotranspiration and retention. However, despite some loss in retention performance, modules with 50% plant cover and runoff zones showed significantly lower plant drought stress than those without runoff zones. Overall, we suggest that reducing plant cover is a simple means of optimizing green roof performance, while creating runoff zones may reduce plant drought stress at the cost of some rainfall retention.
Keywords: green roof, plant cover, plant drought stress, rainfall retention
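The trade-off between retention and drought stress can be illustrated with a minimal daily water-balance "bucket" model: rainfall fills the substrate store, evapotranspiration (ET) empties it, and anything above capacity leaves as runoff. All numbers below are illustrative assumptions, not the study's measurements:

```python
# Minimal green-roof bucket model. Storage capacity, ET rates per cover level,
# and the rainfall series are all assumed values for illustration only.

capacity = 30.0                               # substrate storage capacity, mm (assumed)
et_rate = {0: 1.0, 50: 2.5, 100: 3.0}         # daily ET by % plant cover, mm/day (assumed)

def simulate(rainfall_mm, cover):
    store, runoff = capacity / 2, 0.0
    for rain in rainfall_mm:
        store = max(store - et_rate[cover], 0.0)   # ET depletes the store first
        store += rain
        if store > capacity:                        # excess becomes runoff
            runoff += store - capacity
            store = capacity
    retained = sum(rainfall_mm) - runoff
    return retained / sum(rainfall_mm)              # retention fraction

rain = [0, 0, 12, 0, 0, 0, 25, 0, 0, 5] * 3        # synthetic rainfall series, mm/day
retention = {c: simulate(rain, c) for c in (0, 50, 100)}
print(retention)
```

In this toy model, higher ET (more plant cover) frees more storage between events and so raises retention; the study's finding is that the real gain from 50% to 100% cover is small while drought stress rises sharply.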
Procedia PDF Downloads 115
1705 Switching of Series-Parallel Connected Modules in an Array for Partially Shaded Conditions in a Pollution Intensive Area Using High Powered MOSFETs
Authors: Osamede Asowata, Christo Pienaar, Johan Bekker
Abstract:
Photovoltaic (PV) modules may become a trend for future PV systems because of their greater flexibility in distributed system expansion, easier installation, and higher system-level energy harvesting under shaded or PV manufacturing mismatch conditions, as compared to single or multi-string inverters. Novel residential-scale PV arrays are commonly connected to the grid either by a single DC-AC inverter connected to a series, parallel or series-parallel string of PV panels, or by many small DC-AC inverters which connect one or two panels directly to the AC grid. With increasing worldwide interest in sustainable energy production and use, there is renewed focus on the power electronic converter interface for DC energy sources. Three specific examples of such DC energy sources that will have a role in distributed generation and sustainable energy systems are the photovoltaic (PV) panel, the fuel cell stack, and batteries of various chemistries. A high-efficiency inverter using Metal Oxide Semiconductor Field-Effect Transistors (MOSFETs) for all active switches is presented for non-isolated photovoltaic and AC-module applications. The proposed configuration features high efficiency over a wide load range, low ground leakage current and low output AC-current distortion, with no need for split capacitors. The detailed power stage operating principles, pulse width modulation scheme, multilevel bootstrap power supply, and integrated gate drivers for the proposed inverter are described. Experimental results from a hardware prototype show not only that the MOSFETs operate efficiently in the system, but also that ground leakage current issues are alleviated in the proposed inverter and that a maximum driver circuit efficiency of 98% is achieved. This, in turn, motivates a possible photovoltaic panel switching technique.
This will help to reduce the effect of cloud movements as well as improve the overall efficiency of the system.
Keywords: grid connected photovoltaic (PV), Matlab efficiency simulation, maximum power point tracking (MPPT), module integrated converters (MICs), multilevel converter, series connected converter
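A rough first-order loss budget shows why low on-resistance MOSFETs can push inverter efficiency into this range. All device and operating values below are illustrative datasheet-style assumptions, not measurements from the prototype:

```python
# First-order per-device loss estimate for a hard-switched MOSFET bridge:
# conduction loss P_cond = Irms^2 * Rds_on, and switching loss
# P_sw ~= 0.5 * Vdc * I * (t_rise + t_fall) * f_sw. All values are assumed.

Vdc = 400.0               # DC-link voltage, V (assumed)
Irms = 10.0               # RMS device current, A (assumed)
Rds_on = 0.05             # MOSFET on-resistance, ohm (assumed datasheet value)
t_r, t_f = 20e-9, 30e-9   # switching rise/fall times, s (assumed)
f_sw = 20e3               # switching frequency, Hz (assumed)
P_out = 2000.0            # delivered output power, W (assumed)
n_devices = 4             # MOSFETs in a full bridge (assumed)

P_cond = Irms**2 * Rds_on                      # conduction loss per device, W
P_sw = 0.5 * Vdc * Irms * (t_r + t_f) * f_sw   # switching loss per device, W
P_loss = n_devices * (P_cond + P_sw)
efficiency = P_out / (P_out + P_loss)
print(round(efficiency * 100, 1))              # percent
```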
Procedia PDF Downloads 127
1704 Analyzing the Contamination of Some Food Crops Due to Mineral Deposits in Ondo State, Nigeria
Authors: Alexander Chinyere Nwankpa, Nneka Ngozi Nwankpa
Abstract:
In Nigeria, the Federal government is trying to make sure that everyone has access to enough food that is nutritiously adequate and safe. But in the southwest of Nigeria, notably in Ondo State, the most valuable minerals such as oil and gas, bitumen, kaolin, limestone talc, columbite, tin, gold, coal, and phosphate are abundant. Therefore, some regions of Ondo State are now linked to large quantities of natural radioactivity as a result of the mineral presence. In this work, the baseline radioactivity levels in some of the most important food crops in Ondo State were analyzed, allowing for the prediction of probable radiological health impacts. To this effect, maize (Zea mays), yam (Dioscorea alata) and cassava (Manihot esculenta) tubers were collected from the farmlands in the State because they make up the majority of food's nutritional needs. Ondo State was divided into eight zones in order to provide comprehensive coverage of the research region. At room temperature, the maize (Zea mays), yam (Dioscorea alata), and cassava (Manihot esculenta) samples were dried until they reached a consistent weight. They were pulverized, homogenized, and 250 g packed in a 1-liter Marinelli beaker and kept for 28 days to achieve secular equilibrium. The activity concentrations of Radium-226 (Ra-226), Thorium-232 (Th-232), and Potassium-40 (K-40) were determined in the food samples using Gamma-ray spectrometry. Firstly, the Hyper Pure Germanium detector was calibrated using standard radioactive sources. The gamma counting, which lasted for 36000s for each sample, was carried out in the Centre for Energy Research and Development, Obafemi Awolowo University, Ile-Ife, Nigeria. The mean activity concentration of Ra-226, Th-232 and K-40 for yam were 1.91 ± 0.10 Bq/kg, 2.34 ± 0.21 Bq/kg and 48.84 ± 3.14 Bq/kg, respectively. The content of the radionuclides in maize gave a mean value of 2.83 ± 0.21 Bq/kg for Ra-226, 2.19 ± 0.07 Bq/kg for Th-232 and 41.11 ± 2.16 Bq/kg for K-40. 
The mean activity concentrations in cassava were 2.52 ± 0.31 Bq/kg for Ra-226, 1.94 ± 0.21 Bq/kg for Th-232, and 45.12 ± 3.31 Bq/kg for K-40. The average committed effective doses in zones 6-8 were 0.55 µSv/y for the consumption of yam, 0.39 µSv/y for maize, and 0.49 µSv/y for cassava. These values exceed the annual dose guideline of 0.35 µSv/y for the general public. The values obtained in this work therefore show that there is radiological contamination of some foodstuffs consumed in some parts of Ondo State. We further recommend that systematic and appropriate methods be established for the measurement of gamma-emitting radionuclides, since these are important contributors to the internal exposure of man through ingestion, inhalation, or wounds on the body.
Keywords: contamination, environment, radioactivity, radionuclides
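The committed effective dose figures above follow from a standard ingestion-dose calculation. A minimal sketch in Python, assuming illustrative ICRP-72-style adult dose coefficients and an assumed annual intake (neither taken from this study):

```python
# Sketch of a committed-effective-dose calculation from activity
# concentrations. The dose coefficients and the annual intake below
# are ILLUSTRATIVE assumptions, not the values used in this study.

DOSE_COEFF_SV_PER_BQ = {   # adult ingestion dose coefficients (assumed)
    "Ra-226": 2.8e-7,
    "Th-232": 2.3e-7,
    "K-40": 6.2e-9,
}

def committed_effective_dose(activity_bq_per_kg, annual_intake_kg):
    """Annual committed effective dose in microsieverts:
    E = sum_i(A_i * I * d_i), with A_i the activity concentration of
    radionuclide i (Bq/kg), I the annual intake of the foodstuff (kg),
    and d_i the ingestion dose coefficient (Sv/Bq)."""
    dose_sv = sum(a * annual_intake_kg * DOSE_COEFF_SV_PER_BQ[n]
                  for n, a in activity_bq_per_kg.items())
    return dose_sv * 1e6  # Sv -> microsieverts

# Mean activities reported above for yam (Bq/kg):
yam = {"Ra-226": 1.91, "Th-232": 2.34, "K-40": 48.84}
dose_yam = committed_effective_dose(yam, annual_intake_kg=10.0)
```

The dose scales linearly with both the activity concentration and the consumption rate, which is why zone-specific consumption data matter for the zone 6-8 figures.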
Procedia PDF Downloads 104
1703 Challenges Influencing Nurse Initiated Management of Retroviral Therapy (NIMART) Implementation in Ngaka Modiri Molema District, North West Province, South Africa
Authors: Sheillah Hlamalani Mboweni, Lufuno Makhado
Abstract:
Background: The increasing number of people who tested HIV positive and who demand antiretroviral therapy (ART) prompted the National Department of Health to adopt the WHO recommendation of task shifting, whereby Professional Nurses (PNs), rather than hospital doctors, initiate ART. This resulted in the decentralization of services to primary health care (PHC), generating a need to capacitate PNs in NIMART. After years of training, the impact of NIMART was assessed, and it was established that even though an increased number of patients accessed ART, the quality of care remains a serious concern. The study aims to answer the following question: what are the challenges influencing NIMART implementation in primary health care? Objectives: This study explores the challenges influencing NIMART training and implementation and makes recommendations to improve patient and HIV program outcomes. Methods: A qualitative, explorative program-evaluation research design was used. The study was conducted in the rural districts of North West Province. Purposive sampling was used to select PNs trained in NIMART. FGDs of 6-9 participants were used to collect data, and the data were analysed using ATLAS.ti. Results: Five FGDs (n=28 PNs) were held, and three program managers were interviewed. The study results revealed two themes: inadequacy of NIMART training and health care system challenges. Conclusion: The deficiencies in NIMART training and the health care system challenges are a public health concern, as they compromise the quality of HIV management, resulting in poor patient outcomes, and retard the goal of ending the HIV epidemic. These should be dealt with decisively by all stakeholders.
Recommendations: The National Department of Health should improve NIMART training and HIV management through: standardization of the NIMART training curriculum with the involvement of all relevant stakeholders and skilled facilitators; the introduction of pre-service NIMART training in institutions of higher learning; support of PNs by district and program managers; and plans to address the shortage of staff and negative attitudes so as to ensure compliance with guidelines. There is also a need to develop a conceptual framework that provides guidance and strengthens NIMART implementation in PHC facilities.
Keywords: antiretroviral therapy, nurse initiated management of retroviral therapy, primary health care, professional nurses
Procedia PDF Downloads 158
1702 Underage Internal Migration from Rural to Urban Areas of Ethiopia: The Perspective of Social Marketing in Controlling Child Labor
Authors: Belaynesh Tefera, Ahmed Mohammed, Zelalem Bayisa
Abstract:
This study focuses on the issue of underage internal migration from rural to urban areas in Ethiopia, specifically in the context of child labor. It addresses the significant disparities in living standards between rural and urban areas, which motivate individuals from rural areas to migrate to urban areas in search of better economic opportunities. The study was conducted in Addis Ababa, where there is a high prevalence of underage internal migrants engaged in child labor due to extreme poverty in rural parts of the country. The aim of this study is to explore the life experiences of shoemakers who have migrated from rural areas of Ethiopia to Addis Ababa. The focus is on understanding the factors that push these underage individuals to migrate, the challenges they face, and the implications for child labor. The study adopts a qualitative approach, using semi-structured face-to-face interviews with underage migrants. A total of 27 interviews were conducted in Addis Ababa, Ethiopia, until the point of data saturation. The criteria for selecting interviewees were working as a shoemaker and having migrated to Addis Ababa while underage (below 16 years old). The interviews were audio-taped, transcribed into Amharic, and then translated into English for analysis. The study reveals that the major push factors for underage internal migration are socioeconomic and environmental. Despite improvements in living standards for underage migrants and their families, there is a high prevalence of child labor and a lack of access to education among them. Most interviewees migrated without the accompaniment of family members and faced various challenges, including sleeping on the streets. This study highlights the role of social marketing in addressing the issues of underage internal migration and child labor. It suggests that social marketing can be an effective strategy to protect children from abuse, loneliness, and harassment during the migration process.
The data collection involved conducting in-depth interviews with the underage migrants. The interviews were transcribed and translated for analysis, which focused on identifying common themes and patterns within the interview data. The study addresses the factors contributing to underage internal migration, the challenges faced by underage migrants, the prevalence of child labor, and the potential role of social marketing in addressing these issues. The study concludes that although Ethiopia has policies against child internal migration, it is difficult to protect underage laborers who migrate from rural to urban areas because of the voluntary nature of their migration. The study suggests that social marketing can serve as a means to protect children from abuse and the other challenges faced during migration.
Keywords: underage, internal migration, social marketing, child labor, Ethiopia
Procedia PDF Downloads 79
1701 Analyzing the Commentator Network Within the French YouTube Environment
Authors: Kurt Maxwell Kusterer, Sylvain Mignot, Annick Vignes
Abstract:
To the best of our knowledge, YouTube is the largest video hosting platform in the world. A high number of creators, viewers, subscribers, and commentators act in this specific ecosystem, which generates huge sums of money. Views, subscribers, and comments help to increase the popularity of content creators. The most popular creators are sponsored by brands and participate in marketing campaigns; for a few of them, this becomes a financially rewarding profession. This is made possible through the YouTube Partner Program, which shares revenue among creators based on their popularity. We believe that the role of comments in increasing popularity deserves emphasis. In what follows, YouTube is considered as a bilateral network between videos and commentators. Analyzing a detailed data set focused on French YouTubers, we consider each comment as a link between a commentator and a video. Our research question asks which features of a video give it the highest probability of being commented on. Following on from this, how can we use these features to predict an agent's choice to comment on one video rather than another, given the characteristics of the commentators, videos, topics, channels, and recommendations? We expect to see that the videos of more popular channels generate higher viewer engagement and are thus more frequently commented on. The interest lies in discovering features that have not classically been considered markers of popularity on the platform. A quick view of our data set shows that 96% of the commentators comment only once on a given video. Thus, we study a non-weighted bipartite network between commentators and videos built on the sub-sample of 96% of unique comments. A link exists between two nodes when a commentator makes a comment on a video. We run an Exponential Random Graph Model (ERGM) approach to evaluate which characteristics influence the probability of commenting on a video.
The creation of a link will be explained in terms of common video features, such as duration, quality, number of likes, number of views, etc. Our data cover the period 2020-2021 and focus on the French YouTube environment. From this set of 391 588 videos, we extract the channels that can be monetized according to YouTube regulations (channels with at least 1000 subscribers and more than 4000 hours of viewing time during the last twelve months). In the end, we have a data set of 128 462 videos spread across 4093 channels. Based on these videos, we have a data set of 1 032 771 unique commentators, with a mean of 2 comments per commentator, a minimum of 1 comment, and a maximum of 584 comments.
Keywords: YouTube, social networks, economics, consumer behaviour
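The non-weighted bipartite network described above can be sketched with plain dictionaries. The commentator and video identifiers below are invented toy examples, and the ERGM estimation itself is not reproduced here:

```python
# Minimal sketch of an unweighted bipartite commentator-video network.
# Identifiers are invented; only the network construction is shown.
from collections import defaultdict

def build_bipartite(comments):
    """comments: iterable of (commentator_id, video_id) pairs.
    Returns (videos_by_commentator, commentators_by_video).
    Duplicate comments collapse into a single link, mirroring the
    unweighted network built on the sub-sample of unique comments."""
    by_commentator = defaultdict(set)
    by_video = defaultdict(set)
    for user, video in comments:
        by_commentator[user].add(video)
        by_video[video].add(user)
    return by_commentator, by_video

def video_degree(by_video):
    """Degree of a video node = number of unique commentators, a simple
    engagement proxy that a model such as an ERGM could relate to video
    features (duration, likes, views, channel, ...)."""
    return {video: len(users) for video, users in by_video.items()}
```

A link is created only once per commentator-video pair, so a commentator who comments twice on the same video still contributes a single edge.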
Procedia PDF Downloads 68
1700 Anthropometric Indices of Obesity and Coronary Artery Atherosclerosis: An Autopsy Study in a South Indian Population
Authors: Francis Nanda Prakash Monteiro, Shyna Quadras, Tanush Shetty
Abstract:
The association between human physique and the morbidity and mortality resulting from coronary artery disease has been studied extensively over several decades. Multiple studies have also examined the correlation between the grade of atherosclerosis, coronary artery disease, and anthropometric measurements; however, far fewer of these have been autopsy-based. It has been suggested that while it would be expensive, difficult, and even harmful to subject living patients to imaging modalities such as CT scans and procedures involving contrast media in order to study mild atherosclerosis, no such harm is encountered in the study of autopsy cases. This autopsy-based study aimed to correlate anthropometric measurements and indices of obesity, such as waist circumference (WC), hip circumference (HC), body mass index (BMI), and waist-hip ratio (WHR), with the degree of atherosclerosis in the right coronary artery (RCA), the main branch of the left coronary artery (LCA), and the left anterior descending artery (LADA) in 95 victims of South Indian origin of both genders, aged between 18 and 75 years. The grading of atherosclerosis followed the criteria suggested by the American Heart Association. The study also analysed the correlation of the anthropometric measurements and indices of obesity with the number of coronaries affected by atherosclerosis in an individual. All the anthropometric measurements and derived indices were found to be significantly correlated with each other in both genders, except for age, which was significantly correlated only with WHR. In both genders, severe atherosclerosis was most commonly observed in the LADA, followed by the LCA and RCA. The grade of atherosclerosis in the RCA was significantly related to WHR in males, and the grade of atherosclerosis in the LCA and LADA was significantly related to WHR in females.
In males, a significant relation was observed between the grade of atherosclerosis in the RCA and both WC and WHR, and between the grade of atherosclerosis in the LADA and HC; the same relations were observed in females. Anthropometric measurements and indices of obesity can thus be an effective means of identifying high-risk cases of atherosclerosis at an early stage, which can help reduce the associated cardiac morbidity and mortality. A person whose anthropometric measurements suggest mild atherosclerosis can be advised to modify their lifestyle and reduce their exposure to the other risk factors, while those whose measurements suggest a higher degree of atherosclerosis can be referred for confirmatory procedures so that effective treatment can be started.
Keywords: atherosclerosis, coronary artery disease, indices, obesity
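The two derived indices used above are computed as conventionally defined; the example values in the usage lines are illustrative, not measurements from this study:

```python
# The two obesity indices used in the study, as conventionally defined.

def bmi(weight_kg, height_m):
    """Body mass index = weight (kg) / height (m) squared."""
    return weight_kg / height_m ** 2

def waist_hip_ratio(waist_cm, hip_cm):
    """Waist-hip ratio = waist circumference / hip circumference
    (both in the same unit)."""
    return waist_cm / hip_cm

# Illustrative subject: 80 kg, 1.75 m tall, waist 94 cm, hip 100 cm.
example_bmi = bmi(80, 1.75)            # about 26.1
example_whr = waist_hip_ratio(94, 100) # 0.94
```

Both indices are unit-consistent ratios, which is part of why they are cheap to collect at autopsy compared with imaging-based measures.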
Procedia PDF Downloads 66
1699 Rapid Fetal MRI Using SSFSE, FIESTA and FSPGR Techniques
Authors: Chen-Chang Lee, Po-Chou Chen, Jo-Chi Jao, Chun-Chung Lui, Leung-Chit Tsang, Lain-Chyr Hwang
Abstract:
Fetal Magnetic Resonance Imaging (MRI) is a challenging task because fetal movements can cause motion artifacts in MR images. The remedy is to use fast scanning pulse sequences. The Single-Shot Fast Spin-Echo (SSFSE) T2-weighted imaging technique is routinely performed and often used as a gold standard in clinical examinations. Fast Spoiled Gradient-Echo (FSPGR) T1-Weighted Imaging (T1WI) is often used to identify fat, calcification, and hemorrhage. Fast Imaging Employing Steady-State Acquisition (FIESTA) is commonly used to identify fetal structures as well as the heart and vessels. The contrast of FIESTA images is related to T1/T2 and differs from that of SSFSE. The advantages and disadvantages of these scanning sequences for fetal imaging have not yet been clearly demonstrated. This study aimed to compare three rapid MRI techniques (SSFSE, FIESTA, and FSPGR) for fetal MRI examinations, exploring their image quality and its influencing factors. A 1.5 T GE Discovery 450 clinical MR scanner with an eight-channel high-resolution abdominal coil was used. Twenty-five pregnant women were recruited to undergo fetal MRI examinations with SSFSE, FIESTA, and FSPGR scanning. Multi-oriented and multi-slice images were acquired. Afterwards, the MR images were interpreted and scored by two senior radiologists. The results showed that both SSFSE and T2W-FIESTA provide good image quality among these three rapid imaging techniques. Vessel signals are higher in FIESTA images than in SSFSE images. The Specific Absorption Rate (SAR) of FIESTA is lower than that of the other two techniques, but FIESTA is prone to banding artifacts. FSPGR-T1WI yields a lower Signal-to-Noise Ratio (SNR) because it suffers severely from the impact of maternal and fetal movements. The scan times for the three sequences were 25 s (T2W-SSFSE), 20 s (FIESTA), and 18 s (FSPGR).
In conclusion, all three rapid MR scanning sequences can produce images with high contrast and high spatial resolution. The scan time can be shortened by incorporating parallel imaging techniques so that the motion artifacts caused by fetal movements are reduced. A good understanding of the characteristics of these three rapid MRI techniques helps technologists obtain reproducible, high-quality fetal anatomy images for prenatal diagnosis.
Keywords: fetal MRI, FIESTA, FSPGR, motion artifact, SSFSE
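The remark above on parallel imaging can be made concrete with a rough scan-time model: a conventional 2D acquisition samples one phase-encode line per repetition time, and parallel imaging with acceleration factor R skips all but every R-th line. This is a deliberate simplification (single-shot sequences such as SSFSE fill k-space differently), and the numbers are illustrative, not the study's protocol parameters:

```python
# Rough scan-time model for a 2D sequence with parallel imaging.
# Simplified sketch; not the acquisition parameters of this study.
import math

def scan_time_s(n_phase_encodes, tr_ms, acceleration=1):
    """Approximate acquisition time in seconds:
    (phase-encode lines actually sampled) x (repetition time).
    With parallel imaging at factor R, only every R-th line is acquired,
    cutting scan time roughly by R."""
    lines_sampled = math.ceil(n_phase_encodes / acceleration)
    return lines_sampled * tr_ms / 1000.0
```

Halving the sampled lines roughly halves the time the fetus must stay still, which is the mechanism behind the motion-artifact reduction mentioned above.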
Procedia PDF Downloads 530
1698 Learning Gains and Constraints Resulting from Haptic Sensory Feedback among Preschoolers' Engagement during Science Experimentation
Authors: Marios Papaevripidou, Yvoni Pavlou, Zacharias Zacharia
Abstract:
Embodied cognition and additional (touch) sensory channel theories indicate that physical manipulation is crucial to learning, since it provides, among other things, the touch sensory input needed for constructing knowledge. Given these theories, the use of Physical Manipulatives (PM) becomes a prerequisite for learning. On the other hand, empirical research on learning with Virtual Manipulatives (VM) (e.g., simulations) has provided evidence that the use of PM, and thus haptic sensory input, is not always a prerequisite for learning. In order to investigate which means of experimentation, PM or VM, is required for enhancing student science learning at the kindergarten level, an empirical study was conducted to investigate the impact of haptic feedback on the conceptual understanding of pre-school students (n=44, mean age 5.7) in three science domains: beam balance (D1), sinking/floating (D2), and springs (D3). The participants were divided equally into two groups according to the type of manipulatives used (PM: presence of haptic feedback; VM: absence of haptic feedback) during a semi-structured interview for each of the domains. All interviews followed the Predict-Observe-Explain (POE) strategy and consisted of three phases: initial evaluation, experimentation, and final evaluation. The data collected through the interviews were analyzed qualitatively (open coding to identify students' ideas in each domain) and quantitatively (non-parametric tests). Findings revealed that haptic feedback enabled students to distinguish heavier from lighter objects when holding them during experimentation. In D1, haptic feedback did not differentiate PM and VM students' conceptual understanding of the function of the beam as a means of comparing the mass of objects. In D2, haptic feedback appeared to have a negative impact on PM students' learning.
Feeling the weight of an object strengthened PM students' misconception that heavier objects always sink, whereas the scientifically correct idea that the material of an object determines its sinking/floating behavior in water was significantly more prevalent among the VM students than the PM ones. In D3, the PM students significantly outperformed the VM students with regard to the idea that the heavier an object is, the more the spring will expand, indicating that the haptic input experienced by the PM students served as an advantage to their learning. These findings point to the fact that PMs, and thus touch sensory input, might not always be a requirement for science learning, and that VMs could be considered, under certain circumstances, a viable means of experimentation.
Keywords: haptic feedback, physical and virtual manipulatives, pre-school science learning, science experimentation
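The non-parametric comparison of the PM and VM groups can be illustrated with the Mann-Whitney U statistic, a test commonly used for two independent groups of ordinal scores. The abstract does not name the specific test used, and in practice a library routine (e.g. scipy.stats.mannwhitneyu, which also reports a p-value) would be preferred; the sketch below only computes the raw statistic on invented scores:

```python
# Minimal sketch of the Mann-Whitney U statistic for two independent
# samples of ordinal scores. Illustrative only; the study's actual
# test and data are not reproduced here.

def mann_whitney_u(sample_a, sample_b):
    """U statistic for sample_a versus sample_b: the number of (a, b)
    pairs with a > b, counting tied pairs as one half."""
    u = 0.0
    for a in sample_a:
        for b in sample_b:
            if a > b:
                u += 1.0
            elif a == b:
                u += 0.5
    return u
```

A useful sanity check on any implementation is the identity U(A, B) + U(B, A) = len(A) * len(B).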
Procedia PDF Downloads 138
1697 Effect of Locally Injected Mesenchymal Stem Cells on Bone Regeneration of Rat Calvaria Defects
Authors: Gileade P. Freitas, Helena B. Lopes, Alann T. P. Souza, Paula G. F. P. Oliveira, Adriana L. G. Almeida, Paulo G. Coelho, Marcio M. Beloti, Adalberto L. Rosa
Abstract:
Bone tissue has a great capacity to regenerate when injured by trauma, infectious processes, or neoplasia. However, the extent of an injury may exceed the tissue's inherent regeneration capability, demanding some kind of additional intervention. In this scenario, cell therapy has emerged as a promising alternative for treating challenging bone defects. This study aimed at evaluating the effect of local injection of bone marrow-derived mesenchymal stem cells (BM-MSCs) and adipose tissue-derived mesenchymal stem cells (AT-MSCs) on bone regeneration of rat calvaria defects. BM-MSCs and AT-MSCs were isolated and characterized by the expression of surface markers; cell viability was evaluated after injection through a 21G needle. Defects 5 mm in diameter were created in the calvaria, and after two weeks a single injection of BM-MSCs, AT-MSCs, or vehicle (PBS without cells; Control) was carried out. Cells were tracked by bioluminescence, and at 4 weeks post-injection bone formation was evaluated by micro-computed tomography (μCT), histology, nanoindentation, and the gene expression of bone remodeling markers. The data were evaluated by one-way analysis of variance (p≤0.05). BM-MSCs and AT-MSCs presented characteristics of mesenchymal stem cells, remained viable after passing through a 21G needle, and remained in the defects until day 14. In general, injection of either BM-MSCs or AT-MSCs resulted in greater bone formation than Control. Additionally, this bone tissue displayed an elastic modulus and hardness similar to those of pristine calvaria bone. The expression of all evaluated genes involved in bone formation was upregulated in the bone formed by BM-MSCs compared with AT-MSCs, while genes involved in bone resorption were upregulated in the AT-MSC-formed bone. We show that cell therapy based on the local injection of BM-MSCs or AT-MSCs is effective in delivering viable cells that display local engraftment and induce a significant improvement in bone healing.
Despite the differences in molecular cues observed between BM-MSCs and AT-MSCs, both cell types formed bone tissue in comparable amounts and with comparable properties. These findings may drive cell therapy approaches toward the complete bone regeneration of challenging sites.
Keywords: cell therapy, mesenchymal stem cells, bone repair, cell culture
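The one-way analysis of variance mentioned above reduces to the F statistic: the between-group mean square over the within-group mean square. A minimal sketch with invented group values (a real analysis would use a library routine that also reports the p-value):

```python
# Minimal sketch of the one-way ANOVA F statistic across k groups.
# Group values are invented; illustrative only.

def one_way_anova_f(*groups):
    """F = MS_between / MS_within for a one-way ANOVA, with
    MS_between = SS_between / (k - 1) and MS_within = SS_within / (n - k),
    where k is the number of groups and n the total sample size."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                     for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g)
                    for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))
```

When all group means coincide, SS_between and therefore F are zero; larger F values indicate group means that differ by more than the within-group scatter would suggest.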
Procedia PDF Downloads 184