Search results for: inflow performance relationship
2895 Evaluation of the Diagnostic Potential of IL-2 after Specific Antigen Stimulation with PE35 (Rv3872) and PPE68 (Rv3873) for the Discrimination of Active and Latent Tuberculosis
Authors: Shima Mahmoudi, Babak Pourakbari, Setareh Mamishi, Mostafa Teymuri, Majid Marjani
Abstract:
Although cytokine analysis has greatly contributed to the understanding of tuberculosis (TB) pathogenesis, data on cytokine profiles that might distinguish progression from latency of TB infection are scarce. Since PE/PPE proteins are known to induce strong humoral and cellular immune responses, the aim of this study was to evaluate the diagnostic potential of interleukin-2 (IL-2) as a biomarker after specific antigen stimulation with PE35 and PPE68 for the discrimination of active and latent tuberculosis infection (LTBI). The production of IL-2 was measured in antigen-stimulated whole-blood supernatants following stimulation with recombinant PE35 and PPE68. All the patients with active TB and LTBI had a positive QuantiFERON-TB Gold In-Tube test. The levels of IL-2 following stimulation with recombinant PE35 and PPE68 were significantly higher in the LTBI group than in patients with active TB infection or the control group. The discrimination performance (assessed by the area under the ROC curve) for IL-2 following stimulation with recombinant PE35 and PPE68 between LTBI and patients with active TB was 0.837 (95% CI: 0.72-0.97) and 0.75 (95% CI: 0.63-0.89), respectively. Applying the 12.4 pg/mL cut-off for IL-2 induced by PE35 in the present study population resulted in a sensitivity of 78%, specificity of 78%, PPV of 78% and NPV of 100%. In addition, a sensitivity of 81%, specificity of 70%, PPV of 67% and NPV of 87% were obtained based on the 4.4 pg/mL cut-off for IL-2 induced by PPE68. In conclusion, peptides of the antigens PE35 and PPE68, absent from commonly used BCG strains, stimulated strong IL-2-positive T cell responses in patients with LTBI. This study confirms IL-2 induced by PE35 and PPE68 as a sensitive and specific biomarker and highlights IL-2 as a promising adjunct marker for discriminating LTBI from active TB infection.
Keywords: IL-2, PE35, PPE68, tuberculosis
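For readers who want to reproduce this kind of cut-off evaluation, the short sketch below shows how an ROC curve, AUC, and the derived sensitivity, specificity, PPV and NPV can be computed with scikit-learn; the IL-2 values and group labels in it are invented placeholders, not the study data.

```python
# Hypothetical sketch of a cytokine cut-off evaluation; the values are placeholders, not study data.
import numpy as np
from sklearn.metrics import roc_curve, auc

il2_pg_ml = np.array([3.1, 15.2, 22.8, 5.6, 30.4, 2.0, 18.9, 7.7, 25.1, 4.3])  # stimulated IL-2 (pg/mL)
is_ltbi   = np.array([0,   1,    1,    0,   1,    0,   1,    0,   1,    0])    # 1 = LTBI, 0 = active TB

fpr, tpr, thresholds = roc_curve(is_ltbi, il2_pg_ml)
print("AUC:", auc(fpr, tpr))

cutoff = 12.4                      # pg/mL, the PE35 cut-off quoted in the abstract
predicted_ltbi = il2_pg_ml >= cutoff
tp = np.sum(predicted_ltbi & (is_ltbi == 1))
fp = np.sum(predicted_ltbi & (is_ltbi == 0))
fn = np.sum(~predicted_ltbi & (is_ltbi == 1))
tn = np.sum(~predicted_ltbi & (is_ltbi == 0))
print("sensitivity:", tp / (tp + fn), "specificity:", tn / (tn + fp))
print("PPV:", tp / (tp + fp), "NPV:", tn / (tn + fn))
```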
Procedia PDF Downloads 409
2894 The Participation of Experts in the Criminal Policy on Drugs: The Proposal of a Cannabis Regulation Model in Spain by the Cannabis Policy Studies Group
Authors: Antonio Martín-Pardo
Abstract:
With regard to the context in which this paper is inserted, it is noteworthy that the current criminal policy model in which we find ourselves immersed, denominated by part of the doctrine as the citizen security model, is characterized by a marked tendency towards the discredit of expert knowledge. This type of technical knowledge has been displaced by common sense and by the everyday experience of the people at the time of legislative drafting, as well as by excessive attention to the short-term political effects of the law. Despite this adverse criminal-policy scene, we still find valuable efforts on the side of experts to bring some rationality to legislative development. This is the case of the proposal for a new cannabis regulation model in Spain carried out by the Cannabis Policy Studies Group (hereinafter referred to as ‘GEPCA’). The GEPCA is a multidisciplinary group composed of authors with different orientations, trajectories and interests, but with a common minimum objective: the conviction that the current situation regarding cannabis is unsustainable and that a rational legislative solution must be given to the growing social pressure for the regulation of its consumption and production. This paper details the main lines through which this technical proposal is developed, with the purpose of its dissemination and discussion in the Congress. The basic methodology of the proposal is inductive-expository. Firstly, we offer a brief but solid contextualization of the situation of cannabis in Spain. This contextualization touches on issues such as the national regulatory situation and its relationship with the international context; the criminal, judicial and penitentiary impact of the supply and consumption of cannabis; and the therapeutic use of the substance, among others. Secondly, we turn to the substance of the proposal by detailing the three main cannabis access channels that are proposed, namely: the regulated market, associations of cannabis users, and personal self-cultivation. In each of these options, especially the first two, special attention is paid to both the production and processing of the substance and the necessary administrative control of the activity. Finally, in a third block, some notes are given on a series of subjects that surround the different access options just mentioned and that give fullness and coherence to the proposal. Among these related issues are the consumption and possession of the substance; the advertising and promotion of cannabis; consumption in areas of special risk (e.g., work or driving); the tax regime; and the need to articulate evaluation instruments for the entire process. The main conclusion drawn from the analysis of the proposal is the unsustainability of the current repressive system, clearly unsuccessful, and the need to develop new access routes to cannabis that guarantee both public health and the rights of people who have freely chosen to consume it.
Keywords: cannabis regulation proposal, cannabis policies studies group, criminal policy, expertise participation
Procedia PDF Downloads 119
2893 An Enhanced Approach in Validating Analytical Methods Using Tolerance-Based Design of Experiments (DoE)
Authors: Gule Teri
Abstract:
The effective validation of analytical methods forms a crucial component of pharmaceutical manufacturing. However, traditional validation techniques can occasionally fail to fully account for inherent variations within datasets, which may result in inconsistent outcomes. This deficiency in validation accuracy is particularly noticeable when quantifying low concentrations of active pharmaceutical ingredients (APIs), excipients, or impurities, introducing a risk to the reliability of the results and, subsequently, the safety and effectiveness of the pharmaceutical products. In response to this challenge, we introduce an enhanced, tolerance-based Design of Experiments (DoE) approach for the validation of analytical methods. This approach measures variability explicitly with reference to tolerance or design margins, enhancing the precision and trustworthiness of the results, and provides a systematic, statistically grounded validation technique. It offers an essential tool for industry professionals aiming to guarantee the accuracy of their measurements, particularly for low-concentration components. By incorporating this method, pharmaceutical manufacturers can substantially advance their validation processes, subsequently improving the overall quality and safety of their products. This paper delves deeper into the development, application, and advantages of this tolerance-based DoE approach and demonstrates its effectiveness using High-Performance Liquid Chromatography (HPLC) data for verification. The paper also discusses the potential implications and future applications of this method in enhancing pharmaceutical manufacturing practices and outcomes.
Keywords: tolerance-based design, design of experiments, analytical method validation, quality control, biopharmaceutical manufacturing
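As one concrete way of referencing measured variability against tolerance limits, the sketch below computes a two-sided normal tolerance interval using Howe's approximation. It is a generic illustration under assumed data and coverage settings, not the authors' specific tolerance-based DoE procedure.

```python
# Hedged sketch: a two-sided normal tolerance interval (Howe's approximation), one common
# way to compare measured variability against specification limits. The HPLC assay values,
# coverage (99%) and confidence (95%) below are assumptions for illustration only.
import numpy as np
from scipy import stats

assay = np.array([99.2, 100.4, 98.7, 101.1, 99.8, 100.9, 99.5, 100.2])  # % of label claim
n, nu = len(assay), len(assay) - 1
p, conf = 0.99, 0.95                      # cover 99% of the population with 95% confidence

z = stats.norm.ppf((1 + p) / 2)
chi2 = stats.chi2.ppf(1 - conf, nu)       # lower chi-square quantile at alpha = 1 - conf
k = z * np.sqrt(nu * (1 + 1 / n) / chi2)  # Howe's tolerance factor

mean, sd = assay.mean(), assay.std(ddof=1)
lower, upper = mean - k * sd, mean + k * sd
print(f"tolerance interval: [{lower:.2f}, {upper:.2f}] % (compare against, e.g., 95.0-105.0 limits)")
```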
Procedia PDF Downloads 80
2892 FRATSAN: A New Software for Fractal Analysis of Signals
Authors: Hamidreza Namazi
Abstract:
Fractal analysis assesses the fractal characteristics of data. It consists of several methods for assigning fractal characteristics to a dataset, which may be a theoretical dataset or a pattern or signal extracted from phenomena including natural geometric objects, sound, market fluctuations, heart rates, digital images, molecular motion, networks, etc. Fractal analysis is now widely used in all areas of science. An important limitation of fractal analysis is that arriving at an empirically determined fractal dimension does not necessarily prove that a pattern is fractal; rather, other essential characteristics have to be considered. For this purpose, a Visual C++ based software package called FRATSAN (FRActal Time Series ANalyser) was developed, which extracts information from signals through three measures. These measures are the fractal dimension, Jeffrey's measure and the Hurst exponent. After computing these measures, the software plots the graphs for each measure. Besides computing the three measures, the software can classify whether the signal is fractal or not. In fact, the software uses a dynamic method of analysis for all the measures. A sliding window is selected with a length equal to 10% of the total number of data entries. This sliding window is moved one data entry at a time to obtain all the measures. This makes the computation very sensitive to slight changes in the data, thereby giving the user an acute analysis of the data. In order to test the performance of this software, a set of EEG signals was given as input, and the results were computed and plotted. This software is useful not only for fundamental fractal analysis of signals but can be used for other purposes. For instance, by analyzing the Hurst exponent plot of a given EEG signal in patients with epilepsy, the onset of seizure can be predicted by noticing the sudden changes in the plot.
Keywords: EEG signals, fractal analysis, fractal dimension, hurst exponent, Jeffrey's measure
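The sketch below illustrates the dynamic, sliding-window idea described above with a rescaled-range (R/S) Hurst estimator in Python. It is not the FRATSAN Visual C++ source; the choice of sub-scales and the synthetic "EEG-like" signal are assumptions for demonstration, and only the 10% window moved one entry at a time follows the abstract.

```python
# Illustrative sketch: sliding-window Hurst exponent via rescaled range (R/S).
import numpy as np

def rs(x):
    """Rescaled range R/S of a 1-D segment."""
    y = np.cumsum(x - x.mean())
    r = y.max() - y.min()
    s = x.std(ddof=1)
    return r / s if s > 0 else np.nan

def hurst(segment, n_scales=5):
    """Estimate H by regressing log(R/S) on log(sub-length) within the segment."""
    lengths = np.unique(np.logspace(np.log10(8), np.log10(len(segment)), n_scales).astype(int))
    log_n, log_rs = [], []
    for n in lengths:
        chunks = [segment[i:i + n] for i in range(0, len(segment) - n + 1, n)]
        vals = [rs(c) for c in chunks if len(c) == n]
        if vals:
            log_n.append(np.log(n))
            log_rs.append(np.log(np.nanmean(vals)))
    return np.polyfit(log_n, log_rs, 1)[0]

signal = np.cumsum(np.random.randn(2000))   # synthetic signal standing in for an EEG trace
win = int(0.1 * len(signal))                # sliding window = 10% of the data entries
hurst_track = [hurst(signal[i:i + win]) for i in range(len(signal) - win + 1)]
print("mean sliding-window Hurst exponent:", np.nanmean(hurst_track))
```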
Procedia PDF Downloads 467
2891 Development of Methods for Plastic Injection Mold Weight Reduction
Authors: Bita Mohajernia, R. J. Urbanic
Abstract:
Mold making techniques have focused on meeting the customers' functional and process requirements; however, today, molds are increasing in size and sophistication, and are difficult to manufacture, transport, and set up due to their size and mass. Presently, mold weight saving techniques focus on pockets to reduce the mass of the mold, but the overall size is still large, which introduces costs related to stock material purchase, processing time for process planning, machining and validation, and excess waste material. Reducing the overall size of the mold is desirable for many reasons, but the functional requirements, tool life, and durability cannot be compromised in the process. It is proposed to use Finite Element Analysis simulation tools to model the forces and pressures to determine where material can be removed. The potential results of this project will reduce manufacturing costs. In this study, a lightweight structure is defined by an optimal distribution of material to carry external loads. The optimization objective of this research is to determine methods that provide the optimum layout for the mold structure. The topology optimization method is utilized to improve structural stiffness while decreasing the weight, using the OptiStruct software. The optimized CAD model is compared with the primary geometry of the mold from the NX software. Results of the optimization show an 8% weight reduction, while the actual performance of the optimized structure, validated by physical testing, is similar to that of the original structure.
Keywords: finite element analysis, plastic injection molding, topology optimization, weight reduction
Procedia PDF Downloads 290
2890 Partisan Agenda Setting in Digital Media World
Authors: Hai L. Tran
Abstract:
Previous research on agenda setting effects has often focused on the top-down influence of the media at the aggregate level, while overlooking the capacity of audience members to select media and content to fit their individual dispositions. The decentralized characteristics of online communication and digital news create more choices and greater user control, thereby enabling each audience member to seek out a unique blend of media sources, issues, and elements of messages and to mix them into a coherent individual picture of the world. This study examines how audiences use media differently depending on their prior dispositions, thereby making sense of the world in ways that are congruent with their preferences and cognitions. The current undertaking is informed by theoretical frameworks from two distinct lines of scholarship. According to the ideological migration hypothesis, individuals choose to live in communities with ideologies like their own to satisfy their need to belong. One tends to move away from Zip codes that are incongruent and toward those that are more aligned with one's ideological orientation. This geographical division along ideological lines has been documented in social psychology research. As an extension of agenda setting, the agendamelding hypothesis argues that audiences seek out information in attractive media and blend it into a coherent narrative that fits a common agenda shared by others who think as they do and who communicate with them about issues of public concern. In other words, individuals, through their media use, identify themselves with a group or community that they want to join. Accordingly, the present study hypothesizes that because ideology plays a role in pushing people toward a physical community that fits their need to belong, it also leads individuals to receive an idiosyncratic blend of media and to be influenced by such selective exposure in deciding what issues are more relevant. Consequently, the individualized focus of media choices affects how audiences perceive political news coverage and what they know about political issues. The research project utilizes recent data from The American Trends Panel survey conducted by Pew Research Center to explore the nuanced nature of agenda setting at the individual level and amid heightened polarization. Hypothesis testing is performed with both nonparametric and parametric procedures, including regression and path analysis. This research attempts to explore the media-public relationship from a bottom-up approach, considering the ability of active audience members to select among media in a larger process that entails agenda setting. It helps encourage agenda-setting scholars to further examine effects at the individual, rather than aggregate, level. In addition to theoretical contributions, the study's findings are useful for media professionals in building and maintaining relationships with the audience, considering changes in market share due to the spread of digital and social media.
Keywords: agenda setting, agendamelding, audience fragmentation, ideological migration, partisanship, polarization
Procedia PDF Downloads 59
2889 Effect of Inclusion of Moringa oleifera Leaf on Physiological Responses of Broiler Chickens at Finisher Phase during Hot-Dry Season
Authors: Oyegunle Emmanuel Oke, A. O. Onabajo, M. O. Abioja, F. O. Sorungbe, D. E. Oyetunji, J. A. Abiona, A. O. Ladokun, O. M. Onagbesan
Abstract:
An experiment was conducted to determine the effect of different dietary inclusion levels of Moringa oleifera leaf powder (MOLP) on the growth and physiological responses of broiler chickens during the hot-dry season in Nigeria. Two hundred and forty (240) day-old commercial broiler chicks were randomly allotted to four dietary treatments with four replicates each. Each replicate had 15 birds. The levels of inclusion were 0 g (control group), 4 g, 8 g and 12 g/kg feed. The experiment lasted for eight weeks. The results of the study revealed that the initial body weight was significantly (P < 0.05) higher in birds fed the 12 g/kg diet than in those fed 0, 4, and 8 g MOLP. The birds fed the 0, 4 and 8 g/kg diets, however, had similar weights. The final body weight was significantly (P < 0.05) higher in the birds fed 12 g MOLP than in those fed 0, 4 and 8 g MOLP. The final weights were similar in the birds fed the 4 and 8 g/kg diets but higher (P < 0.05) than those of the birds in the control group. The body weight gain was similar in birds fed 0 and 4 g MOLP but significantly higher (P < 0.05) than that of the birds on the 12 g/kg diet. There were no significant differences (P > 0.05) in feed intake. The serum albumin of the birds fed the 12 g MOLP/kg diet (48.85 g/L) was significantly (P < 0.05) higher than the mean values of those fed the control (0 g) and 8 g MOLP/kg diets, which were 36.05 and 37.10 g/L, respectively. Birds fed 12 g MOLP/kg feed recorded the lowest level of triglyceride (122.75 g/L), which was significantly (P < 0.05) lower than those of the birds fed the 0 and 4 g/kg MOLP diets. The serum corticosterone decreased with increasing MOLP inclusion level; the birds fed 12 g MOLP had the lowest value. This study has shown that MOLP may contain potent antioxidants capable of ameliorating the effects of heat stress in broiler chickens at 12 g MOLP inclusion.
Keywords: physiology, performance, heat stress, anti-oxidant
Procedia PDF Downloads 353
2888 Application of 2D Electrical Resistivity Tomographic Imaging Technique to Study Climate Induced Landslide and Slope Stability through the Analysis of Factor of Safety: A Case Study in Ooty Area, Tamil Nadu, India
Authors: S. Maniruzzaman, N. Ramanujam, Qazi Akhter Rasool, Swapan Kumar Biswas, P. Prasad, Chandrakanta Ojha
Abstract:
Landslide is one of the major natural disasters in South Asian countries. By applying 2D electrical resistivity tomographic imaging, estimates of the geometry, thickness, and depth of the failure zone of a landslide can be made. Landslides are a pertinent problem in the Nilgiris plateau, next to the Himalaya. The Nilgiris range consists of hard Archean metamorphic rocks. Intense weathering that prevailed during Precambrian time has deformed the rocks up to 45 m depth. Landslides are dominant in the southern and eastern parts of the plateau, where the drainage basins are comparatively smaller than the northern ones; the low drainage density and coarse texture there permit more infiltration of rainwater, whereas the northern part of the plateau, with its high drainage density and fine texture, has less infiltration than runoff and is less susceptible to landslides. To obtain comprehensive information about the landslide zone, a 2D electrical resistivity tomographic imaging study with a CRM 500 resistivity meter was carried out in the Coonoor-Mettupalayam sector of the Nilgiris plateau. To calculate the Factor of Safety, the infinite slope model of Brunsden and Prior is used. The Factor of Safety (FS) can be expressed as the ratio of resisting forces to disturbing forces; if FS < 1, disturbing forces are larger than resisting forces and failure may occur. The geotechnical parameters of the soil samples are calculated on the basis of the apparent resistivity values of the litho units measured from the 2D ERT image of the landslide zone. The relationship between friction angle and various soil properties is established by simple regression analysis of the apparent resistivity data. An increase in water content in the slide zone reduces the effectiveness of the shearing resistance and increases the sliding movement. The relation of time-lapse resistivity changes to slope failure is determined through a geophysical Factor of Safety, which depends on resistivity and site topography. The ERT technique infers soil properties at variable depths over wider areas; this approach to retrieving soil properties overcomes the limitation of the point information provided by rain gauges and porous probes. Monitoring of slope stability through the ERT technique is non-invasive, low cost, and does not alter the soil structure. In landslide-prone areas, an automated electrical resistivity tomographic imaging system with electrode networks should be permanently installed to monitor the hydraulic precursors of landslide movement.
Keywords: 2D ERT, landslide, safety factor, slope stability
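The sketch below shows the general infinite-slope form of that resisting-over-disturbing ratio. It is a generic illustration: the exact Brunsden-and-Prior expression and the site parameters used by the authors are not reproduced here, so all numerical values in the example are assumed.

```python
# Minimal sketch of an infinite-slope Factor of Safety (resisting / disturbing forces).
# All parameter values are assumed, for illustration only.
import math

def factor_of_safety(c_eff, phi_deg, gamma, gamma_w, z, beta_deg, m):
    """
    c_eff     effective cohesion (kPa)
    phi_deg   effective friction angle (degrees)
    gamma     unit weight of soil (kN/m^3); gamma_w of water
    z         depth of the failure plane (m)
    beta_deg  slope angle (degrees)
    m         fraction of the failure depth that is saturated (0-1)
    """
    beta = math.radians(beta_deg)
    phi = math.radians(phi_deg)
    resisting = c_eff + (gamma * z - m * gamma_w * z) * math.cos(beta) ** 2 * math.tan(phi)
    disturbing = gamma * z * math.sin(beta) * math.cos(beta)
    return resisting / disturbing

# Increasing saturation (m) lowers FS below 1, i.e. failure may occur.
for m in (0.0, 0.5, 1.0):
    fs = factor_of_safety(c_eff=5.0, phi_deg=30.0, gamma=18.0, gamma_w=9.81, z=3.0, beta_deg=35.0, m=m)
    print(f"saturation fraction {m:.1f} -> FS = {fs:.2f}")
```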
Procedia PDF Downloads 319
2887 Transformer Fault Diagnostic Predicting Model Using Support Vector Machine with Gradient Decent Optimization
Authors: R. O. Osaseri, A. R. Usiobaifo
Abstract:
The power transformer, which is responsible for voltage transformation, is of great relevance in the power system, and oil-immersed transformers are widely used all over the world. Prompt and proper maintenance of the transformer is of utmost importance. The dissolved gas content in power transformer oil is of enormous importance in detecting incipient faults of the transformer. There is a need for accurate prediction of incipient faults in transformer oil in order to facilitate prompt maintenance, reduce cost, and minimize error. Fault prediction and diagnosis have been the focus of many researchers, and many previous works have reported the use of artificial intelligence to predict incipient transformer failures. In this study, a machine learning technique was employed, using gradient descent algorithms and a Support Vector Machine (SVM), to predict incipient transformer faults. The method focuses on creating a system that improves its performance based on previous results and historical data. The system design approach is basically in two phases: a training phase and a testing phase. The gradient descent algorithm is trained with a training dataset, while the learned algorithm is applied to a set of new data. These two datasets are used to establish the accuracy of the proposed model. In this study, a transformer fault diagnostic model based on a Support Vector Machine (SVM) and gradient descent algorithms has been presented, with satisfactory diagnostic capability and a higher percentage of correct predictions of incipient transformer failures than existing diagnostic methods.
Keywords: diagnostic model, gradient descent, machine learning, support vector machine (SVM), transformer fault
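A minimal sketch of this kind of pipeline is given below: a linear SVM trained by stochastic gradient descent (hinge loss) on dissolved-gas features, with separate training and testing sets. The gas features, labels, and synthetic data are placeholders, not the authors' dataset or exact formulation.

```python
# Hedged sketch: linear SVM trained by stochastic gradient descent on dissolved-gas features.
import numpy as np
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# columns: H2, CH4, C2H6, C2H4, C2H2 concentrations (ppm) - assumed DGA features
X = rng.lognormal(mean=3.0, sigma=1.0, size=(500, 5))
y = (X[:, 3] / (X[:, 1] + 1e-9) > 1.0).astype(int)   # toy rule standing in for "incipient fault"

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

model = make_pipeline(StandardScaler(),
                      SGDClassifier(loss="hinge", alpha=1e-4, max_iter=1000, tol=1e-3))
model.fit(X_train, y_train)                           # training phase
print("test accuracy:", model.score(X_test, y_test))  # testing phase on new data
```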
Procedia PDF Downloads 322
2886 Use of Simulation in Medical Education: Role and Challenges
Authors: Raneem Osama Salem, Ayesha Nuzhat, Fatimah Nasser Al Shehri, Nasser Al Hamdan
Abstract:
Background: Recently, most medical schools around the globe have been using simulation for teaching and assessing students' clinical skills and competence. There are many obstacles that could face students and faculty when simulation sessions are introduced into the undergraduate curriculum. Objective: The aim of this study is to obtain the opinion of undergraduate medical students and our faculty regarding the role of simulation in the undergraduate curriculum, the simulation modalities used, and perceived barriers to implementing simulation sessions. Methods: To address the role of simulation, the modalities used, and perceived challenges to implementation of simulation sessions, a self-administered, pilot-tested questionnaire with 18 items using a 5-point Likert scale was distributed. Participants included undergraduate male medical students (n=125), female students (n=70), and faculty members (n=14). Results: Various learning outcomes are achieved and improved through the technology-enhanced simulation sessions, such as communication skills, diagnostic skills, procedural skills, self-confidence, and integration of basic and clinical sciences. The use of high-fidelity simulators, simulated patients and task trainers was preferred by our students and faculty for teaching and learning and as an evaluation tool. According to most of the students, institutional support in terms of resources, staff and duration of sessions was adequate. However, motivation to participate in the sessions and provision of adequate feedback by the staff were constraints. Conclusion: The use of the simulation laboratory is of great benefit to the students and a great teaching tool for the staff to ensure students' learning of the various skills.
Keywords: simulators, medical students, skills, simulated patients, performance, challenges, skill laboratory
Procedia PDF Downloads 407
2885 Organic Co-Polymer Monolithic Columns for Liquid Chromatography Mixed Mode Protein Separations
Authors: Ahmed Alkarimi, Kevin Welham
Abstract:
Organic mixed-mode monolithic columns were fabricated from glycidyl methacrylate-co-ethylene dimethacrylate-co-stearyl methacrylate, using glycidyl methacrylate and stearyl methacrylate as co-monomers representing 30% and 70%, respectively, of the liquid volume, with ethylene dimethacrylate as crosslinker and 2,2-dimethoxy-2-phenylacetophenone as the free radical initiator. The monomers were mixed with a binary porogenic solvent comprising propan-1-ol and methanol (0.825 mL each). The monolith was formed by photo-polymerization (365 nm) inside a borosilicate glass tube (1.5 mm ID and 3 mm OD x 50 mm length). The monolith was observed to have formed correctly by optical examination and generated reasonable backpressure, approximately 650 psi at a flow rate of 0.2 mL min⁻¹ of 50:50 acetonitrile:water. The morphological properties of the monolithic columns were investigated using scanning electron microscopy images and Brunauer-Emmett-Teller analysis; the results showed that the monolith was formed properly, with a 19.98 ± 0.01 mm² surface area, 0.0205 ± 0.01 cm³ g⁻¹ pore volume and 6.93 ± 0.01 nm average pore size. The polymer monolith formed was further investigated using proton nuclear magnetic resonance and Fourier transform infrared spectroscopy. The monolithic columns were investigated using high-performance liquid chromatography to test their ability to separate different samples with a range of properties. The columns displayed both hydrophobic/hydrophilic and hydrophobic/ion exchange interactions with the compounds tested, indicating true mixed-mode separations. The mixed-mode monolithic columns exhibited significant separation of proteins.
Keywords: LC separation, proteins separation, monolithic column, mixed mode
Procedia PDF Downloads 162
2884 A Biophysical Study of the Dynamic Properties of Glucagon Granules in α Cells by Imaging-Derived Mean Square Displacement and Single Particle Tracking Approaches
Authors: Samuele Ghignoli, Valentina de Lorenzi, Gianmarco Ferri, Stefano Luin, Francesco Cardarelli
Abstract:
Insulin and glucagon are the two essential hormones for maintaining proper blood glucose homeostasis, which is disrupted in diabetes. Constantly growing research interest has been focused on the study of the subcellular structures involved in hormone secretion, namely insulin- and glucagon-containing granules, and on the mechanisms regulating their behaviour. Yet, while several successful attempts were reported describing the dynamic properties of insulin granules, little is known about their counterparts in α cells, the glucagon-containing granules. To fill this gap, we used αTC1 clone 9 cells as a model of α cells and ZIGIR as a fluorescent zinc chelator for granule labelling. We started by using spatiotemporal fluorescence correlation spectroscopy in the form of imaging-derived mean square displacement (iMSD) analysis. This afforded quantitative information on the average dynamic and structural properties of glucagon granules, with insulin granules as a benchmark. Interestingly, the iMSD sensitivity to average granule size allowed us to confirm that glucagon granules are smaller than insulin ones (~1.4-fold, further validated by STORM imaging). To investigate possible heterogeneities in granule dynamic properties, we moved from correlation spectroscopy to single particle tracking (SPT). We developed a MATLAB script to localize and track single granules with high spatial resolution. This enabled us to classify the glucagon granules, based on their dynamic properties, as 'blocked' (i.e., trajectories corresponding to immobile granules), 'confined/diffusive' (i.e., trajectories corresponding to slowly moving granules in a defined region of the cell), or 'drifted' (i.e., trajectories corresponding to fast-moving granules). In cell-culturing control conditions, results show this average distribution: 32.9 ± 9.3% blocked, 59.6 ± 9.3% confined/diffusive, and 7.4 ± 3.2% drifted. This benchmarking provided us with a foundation for investigating selected experimental conditions of interest, such as the glucagon-granule relationship with the cytoskeleton. For instance, if Nocodazole (10 μM) is used for microtubule depolymerization, the percentage of drifted motion collapses to 3.5 ± 1.7%, while immobile granules increase to 56.0 ± 10.7% (the remaining 40.4 ± 10.2% being confined/diffusive). This result confirms the clear link between glucagon-granule motion and cytoskeleton structures, a first step towards understanding the intracellular behaviour of this subcellular compartment. The information collected might now serve to support future investigations on glucagon granules in physiology and disease. Acknowledgment: This work has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 866127, project CAPTUR3D).
Keywords: glucagon granules, single particle tracking, correlation spectroscopy, ZIGIR
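To make the trajectory classification concrete, the sketch below (in Python, whereas the authors used a MATLAB script) computes a per-trajectory mean square displacement and sorts motion into blocked, confined/diffusive, or drifted classes from the fitted anomalous exponent. The exponent thresholds and the synthetic trajectories are assumptions, not the authors' criteria.

```python
# Illustrative sketch: classify single-particle trajectories from their MSD scaling exponent.
import numpy as np

def msd(track):
    """track: (N, 2) array of x, y positions; returns lags and MSD for lags 1..N//4."""
    lags = np.arange(1, len(track) // 4 + 1)
    return lags, np.array([np.mean(np.sum((track[lag:] - track[:-lag]) ** 2, axis=1))
                           for lag in lags])

def classify(track, dt=0.1):
    lags, m = msd(track)
    alpha = np.polyfit(np.log(lags * dt), np.log(m + 1e-12), 1)[0]  # MSD ~ t^alpha
    if alpha < 0.2:
        return "blocked"
    if alpha < 1.2:
        return "confined/diffusive"
    return "drifted"

rng = np.random.default_rng(1)
immobile = rng.normal(0, 0.01, size=(200, 2))                      # localization noise only
diffusive = np.cumsum(rng.normal(0, 0.05, size=(200, 2)), axis=0)  # random walk
drifted = diffusive + 0.05 * np.arange(200)[:, None]               # random walk plus drift
for name, tr in [("immobile", immobile), ("diffusive", diffusive), ("drifted", drifted)]:
    print(name, "->", classify(tr))
```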
Procedia PDF Downloads 109
2883 Pilot-free Image Transmission System of Joint Source Channel Based on Multi-Level Semantic Information
Authors: Linyu Wang, Liguo Qiao, Jianhong Xiang, Hao Xu
Abstract:
In semantic communication, existing joint source-channel coding (JSCC) wireless communication systems without a pilot have unstable transmission performance and cannot effectively capture the global information and location information of images. In this paper, a pilot-free image transmission system of joint source-channel coding based on multi-level semantic information (multi-level JSCC) is proposed. The transmitter of the system is composed of two networks. The feature extraction network is used to extract the high-level semantic features of the image, compress the information transmitted for the image, and improve bandwidth utilization. The feature retention network is used to preserve low-level semantic features and image details to improve communication quality. The receiver is also composed of two networks. The received high-level semantic features are fused with the low-level semantic features after a feature enhancement network in the same dimension, then the image dimension is restored through a feature recovery network, and the image location information is effectively used for image reconstruction. This paper verifies that the proposed multi-level JSCC algorithm can effectively transmit and recover image information in both AWGN and Rayleigh fading channels, and the peak signal-to-noise ratio (PSNR) is improved by 1-2 dB compared with other algorithms under the same simulation conditions.
Keywords: deep learning, JSCC, pilot-free picture transmission, multilevel semantic information, robustness
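For reference, the quoted 1-2 dB gains are measured with PSNR; the short sketch below gives the standard PSNR definition. The arrays in it are placeholders, not outputs of the proposed multi-level JSCC networks.

```python
# Minimal sketch of the PSNR metric used to report the quality gains above.
import numpy as np

def psnr(original, reconstructed, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two images of the same shape."""
    mse = np.mean((original.astype(np.float64) - reconstructed.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)

img = np.random.randint(0, 256, size=(64, 64, 3))
noisy = np.clip(img + np.random.normal(0, 5, img.shape), 0, 255)
print(f"PSNR: {psnr(img, noisy):.2f} dB")
```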
Procedia PDF Downloads 120
2882 Technical and Economic Analysis of Smart Micro-Grid Renewable Energy Systems: An Applicable Case Study
Authors: M. A. Fouad, M. A. Badr, Z. S. Abd El-Rehim, Taher Halawa, Mahmoud Bayoumi, M. M. Ibrahim
Abstract:
Renewable energy-based micro-grids are presently attracting significant consideration. The smart grid system is considered a reliable solution for the expected deficiency in the power required from future power systems. The purpose of this study is to determine the optimal component sizes of a micro-grid, investigating technical and economic performance along with the environmental impacts. The micro-grid load is divided between two small factories; both on-grid and off-grid modes are considered. The micro-grid includes photovoltaic cells, a back-up diesel generator, wind turbines, and a battery bank. The estimated load pattern is 76 kW peak. The system is modeled and simulated with the MATLAB/Simulink tool to identify the technical issues based on the renewable power generation units. To evaluate the system economy, two criteria are used: the net present cost and the cost of generated electricity. The most feasible system components for the selected application are obtained, based on the required parameters, using the HOMER simulation package. The results showed that a Wind/Photovoltaic (W/PV) on-grid system is more economical than a Wind/Photovoltaic/Diesel/Battery (W/PV/D/B) off-grid system, as the cost of generated electricity (COE) is 0.266 $/kWh and 0.316 $/kWh, respectively. Considering the cost of carbon dioxide emissions, the off-grid system becomes competitive with the on-grid system, as the COE is found to be 0.256 $/kWh and 0.266 $/kWh for the on-grid and off-grid systems, respectively.
Keywords: renewable energy sources, micro-grid system, modeling and simulation, on/off grid system, environmental impacts
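As a pointer to how the COE criterion is typically obtained, the sketch below annualizes capital cost with a capital recovery factor and divides total annualized cost by the energy served, in the spirit of HOMER's definitions. All input numbers are assumptions for illustration, not the case-study values.

```python
# Hedged sketch: levelized cost of energy (COE) from annualized cost and energy served.
def crf(i, n):
    """Capital recovery factor for real interest rate i over n years."""
    return i * (1 + i) ** n / ((1 + i) ** n - 1)

def coe(capital_cost, annual_om, annual_fuel, annual_energy_kwh, i=0.06, lifetime=25):
    annualized_capital = capital_cost * crf(i, lifetime)
    total_annualized = annualized_capital + annual_om + annual_fuel
    return total_annualized / annual_energy_kwh

# e.g. a hypothetical W/PV system serving a 76 kW peak load (all figures assumed)
print(f"COE = {coe(capital_cost=250_000, annual_om=6_000, annual_fuel=0.0, annual_energy_kwh=220_000):.3f} $/kWh")
```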
Procedia PDF Downloads 270
2881 The Development of a Low Carbon Cementitious Material Produced from Cement, Ground Granulated Blast Furnace Slag and High Calcium Fly Ash
Authors: Ali Shubbar, Hassnen M. Jafer, Anmar Dulaimi, William Atherton, Ali Al-Rifaie
Abstract:
This research presents experimental work investigating the influence of utilising Ground Granulated Blast Furnace Slag (GGBS) and High Calcium Fly Ash (HCFA) as a partial replacement for Ordinary Portland Cement (OPC) to produce a low carbon cementitious material with compressive strength comparable to OPC. Firstly, GGBS was used as a partial replacement for OPC to produce a binary blended cementitious material (BBCM); the replacements were 0, 10, 15, 20, 25, 30, 35, 40, 45 and 50% by the dry mass of OPC. The optimum BBCM was mixed with HCFA to produce a ternary blended cementitious material (TBCM). The replacements were 0, 10, 15, 20, 25, 30, 35, 40, 45 and 50% by the dry mass of BBCM. The compressive strength at ages of 7 and 28 days was utilised for assessing the performance of the test specimens in comparison to the reference mixture using 100% OPC as a binder. The results showed that the optimum BBCM was the mix produced from 25% GGBS and 75% OPC, with a compressive strength of 32.2 MPa at the age of 28 days. In addition, the results for the TBCM showed that the addition of 10, 15, 20 and 25% of HCFA to the optimum BBCM improved the compressive strength by 22.7, 11.3, 5.2 and 2.1%, respectively, at 28 days. However, replacing more than 25% of the optimum BBCM with HCFA showed a gradual drop in the compressive strength in comparison to the control mix. The TBCM with 25% HCFA was considered the optimum, as it showed better compressive strength than the control mix and at the same time reduced the cement content to 56%. Reducing the cement content to 56% will contribute to decreasing the cost of construction materials, provide better compressive strength and also reduce CO2 emissions into the atmosphere.
Keywords: cementitious material, compressive strength, GGBS, HCFA, OPC
Procedia PDF Downloads 194
2880 Optimal Design of Storm Water Networks Using Simulation-Optimization Technique
Authors: Dibakar Chakrabarty, Mebada Suiting
Abstract:
Rapid urbanization coupled with changes in land use pattern results in increasing peak discharge and shortening of catchment time of concentration. The consequence is floods, which often inundate roads and inhabited areas of cities and towns. Management of storm water resulting from rainfall has, therefore, become an important issue for municipal bodies. Proper management of storm water obviously includes adequate design of storm water drainage networks. The design of a storm water network is a costly exercise. Least-cost design of storm water networks assumes significance, particularly when the funds available are limited. Optimal design of a storm water system is a difficult task as it involves the design of various components, such as open or closed conduits, storage units, pumps, etc. In this paper, a methodology for least-cost design of storm water drainage systems is proposed. The methodology consists of coupling a storm water simulator with an optimization method. The simulator used in this study is EPA's Storm Water Management Model (SWMM), which is linked with the Genetic Algorithm (GA) optimization method. The model proposed here is a mixed-integer nonlinear optimization formulation, which minimizes the sectional areas of the open conduits of storm water networks while satisfactorily conveying the runoff resulting from rainfall to the network outlet. Performance evaluations of the developed model show that the proposed method can be used for cost-effective design of open-conduit-based storm water networks.
Keywords: genetic algorithm (GA), optimal design, simulation-optimization, storm water network, SWMM
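The sketch below illustrates the simulation-optimization coupling in its simplest form: a genetic algorithm searches conduit cross-sectional areas, and a simulator call returns a flood volume used as a penalty in the cost function. The `run_swmm` function is a hypothetical stand-in for invoking EPA SWMM through some wrapper, not a real SWMM API call, and all bounds and costs are assumptions.

```python
# Hedged sketch of GA-based least-cost sizing with a placeholder hydraulic simulator.
import random

N_CONDUITS, POP, GENERATIONS = 6, 30, 50
AREA_MIN, AREA_MAX = 0.2, 3.0            # m^2, assumed bounds on conduit sectional area
UNIT_COST = 1000.0                       # assumed cost per m^2 of section per conduit

def run_swmm(areas):
    """Hypothetical placeholder: return total flood volume (m^3) for a candidate design."""
    capacity = sum(areas)
    return max(0.0, 40.0 - 10.0 * capacity)   # toy surrogate standing in for the SWMM run

def fitness(areas):
    cost = UNIT_COST * sum(areas)
    penalty = 1e5 * run_swmm(areas)           # heavily penalize any flooding
    return cost + penalty

def mutate(ind):
    return [min(AREA_MAX, max(AREA_MIN, a + random.gauss(0, 0.1))) for a in ind]

population = [[random.uniform(AREA_MIN, AREA_MAX) for _ in range(N_CONDUITS)] for _ in range(POP)]
for _ in range(GENERATIONS):
    population.sort(key=fitness)              # lower combined cost is better
    parents = population[:POP // 2]
    children = [mutate(random.choice(parents)) for _ in range(POP - len(parents))]
    population = parents + children

best = min(population, key=fitness)
print("best conduit areas (m^2):", [round(a, 2) for a in best], "objective:", round(fitness(best), 1))
```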
Procedia PDF Downloads 248
2879 Micro-Ribonucleic Acid-21 as High Potential Prostate Cancer Biomarker
Authors: Regina R. Gunawan, Indwiani Astuti, H. Raden Danarto
Abstract:
Cancer is the leading cause of death worldwide. Cancer is caused by mutations that alter the function of normal human genes and give rise to cancer genes. MicroRNA (miRNA) is a small non-coding RNA that regulates genes through complementary binding to a target mRNA, causing mRNA degradation. miRNA works by either promoting or suppressing cell proliferation. miRNA expression levels in cancer may offer additional value for miRNA as a biomarker in cancer diagnostics. miRNA-21 is believed to have a role in carcinogenesis by enhancing proliferation, anti-apoptosis, cell cycle progression and invasion of tumor cells. The hsa-miR-21-5p marker has been identified in the urine of prostate cancer (PCa) and benign prostatic hyperplasia (BPH) patients. This research aimed to explore the diagnostic performance of miR-21 in differentiating PCa and BPH patients. In this study, urine samples were collected from 20 PCa patients and 20 BPH patients. miR-21 relative expression against the reference gene was analyzed and compared between the two groups. miRNA expression was analyzed using the comparative quantification method to find the fold change. The validity of miR-21 in identifying PCa patients was assessed by quantifying the sensitivity and specificity with a contingency table. The relative expression of miR-21 against miR-16 differs by a fold change of 12.98 between PCa and BPH patients. From a contingency table of the Cq expression of miR-21 for distinguishing PCa patients from BPH patients, Cq miR-21 has 100% sensitivity and 75% specificity. miR-21 relative expression can therefore be used to discriminate PCa from BPH using a urine sample. Furthermore, the expression of miR-21 has higher sensitivity than PSA (prostate-specific antigen); therefore, miR-21 has high potential to be analyzed and developed further.
Keywords: benign prostate hyperplasia, biomarker, miRNA-21, prostate cancer
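The sketch below shows the arithmetic behind the comparative quantification mentioned above: relative expression of miR-21 against the miR-16 reference by the 2^(-ddCq) method, plus sensitivity and specificity from a contingency table. The Cq values are invented placeholders, and the table counts are simply chosen to reproduce the reported 100% sensitivity and 75% specificity for illustration.

```python
# Hedged sketch of the 2^(-ddCq) fold-change calculation and contingency-table metrics.
def fold_change(cq_target_case, cq_ref_case, cq_target_ctrl, cq_ref_ctrl):
    d_cq_case = cq_target_case - cq_ref_case     # dCq in the PCa sample (target vs. reference)
    d_cq_ctrl = cq_target_ctrl - cq_ref_ctrl     # dCq in the BPH (calibrator) sample
    return 2.0 ** -(d_cq_case - d_cq_ctrl)       # Livak 2^(-ddCq) relative expression

print("fold change (placeholder Cq values):", round(fold_change(26.1, 22.0, 30.0, 22.2), 2))

# contingency table: counts chosen so that sensitivity = 100% and specificity = 75%
tp, fp, fn, tn = 20, 5, 0, 15
print("sensitivity:", tp / (tp + fn), "specificity:", tn / (tn + fp))
```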
Procedia PDF Downloads 159
2878 The Improvement of Turbulent Heat Flux Parameterizations in Tropical GCMs Simulations Using Low Wind Speed Excess Resistance Parameter
Authors: M. O. Adeniyi, R. T. Akinnubi
Abstract:
The parameterization of turbulent heat fluxes is needed for modeling land-atmosphere interactions in Global Climate Models (GCMs). However, current GCMs still have difficulty producing reliable turbulent heat fluxes for humid tropical regions, which may be due to inadequate parameterization of the roughness lengths for momentum (z0m) and heat (z0h) transfer. These roughness lengths are usually expressed in terms of an excess resistance factor (κB⁻¹), and this factor is used to account for the different resistances to momentum and heat transfer. In this paper, a more appropriate excess resistance factor (κB⁻¹) suitable for low wind speed conditions was developed and incorporated into the aerodynamic resistance approach (ARA) in GCMs. Also, the performance of various standard GCM κB⁻¹ schemes developed for high wind speed conditions was assessed. Based on in-situ surface heat fluxes and profile measurements of wind speed and temperature from the Nigeria Micrometeorological Experimental site (NIMEX), a new κB⁻¹ was derived through application of Monin-Obukhov similarity theory and the Brutsaert theoretical model for heat transfer. Turbulent flux parameterizations with this new formula provide better estimates of heat fluxes when compared with those estimated using existing GCM κB⁻¹ schemes. With the derived κB⁻¹, the MBE and RMSE of the parameterized QH ranged from -1.15 to -5.10 W m⁻² and 10.01 to 23.47 W m⁻², while those of QE ranged from -8.02 to 6.11 W m⁻² and 14.01 to 18.11 W m⁻², respectively. The derived κB⁻¹ gave better estimates of QH than of QE during daytime. The derived scheme is κB⁻¹ = 6.66 Re∗^0.02 - 5.47, where Re∗ is the Reynolds number. The derived κB⁻¹ scheme, which corrects a well-documented large overestimation of turbulent heat fluxes, is therefore recommended for most regional models within the tropics, where low wind speeds are prevalent.
Keywords: humid, tropic, excess resistance factor, overestimation, turbulent heat fluxes
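To show how such a scheme enters a bulk flux estimate, the sketch below evaluates the quoted κB⁻¹ = 6.66 Re∗^0.02 - 5.47 and uses it in a neutral-conditions aerodynamic-resistance estimate of the sensible heat flux QH. The definition of Re∗ as a roughness Reynolds number (u∗ z0m / ν), the neglect of stability corrections, and all numerical inputs are assumptions, not the NIMEX measurements or the authors' full ARA implementation.

```python
# Hedged sketch: derived excess resistance factor inside a neutral bulk estimate of QH.
import math

def kb_inverse(re_star):
    return 6.66 * re_star ** 0.02 - 5.47          # scheme quoted in the abstract

def sensible_heat_flux(u, t_sfc, t_air, z=2.0, z0m=0.01, u_star=0.25,
                       rho=1.2, cp=1004.0, karman=0.4, nu=1.5e-5):
    re_star = u_star * z0m / nu                   # roughness Reynolds number (assumed definition)
    z0h = z0m * math.exp(-kb_inverse(re_star))    # ln(z0m / z0h) = kB^-1
    ra = (math.log(z / z0m) * math.log(z / z0h)) / (karman ** 2 * u)  # neutral aerodynamic resistance
    return rho * cp * (t_sfc - t_air) / ra        # QH in W m^-2

print(f"QH ~ {sensible_heat_flux(u=1.5, t_sfc=305.0, t_air=301.0):.1f} W/m^2 (assumed low-wind inputs)")
```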
Procedia PDF Downloads 202
2877 DLtrace: Toward Understanding and Testing Deep Learning Information Flow in Deep Learning-Based Android Apps
Authors: Jie Zhang, Qianyu Guo, Tieyi Zhang, Zhiyong Feng, Xiaohong Li
Abstract:
With the widespread popularity of mobile devices and the development of artificial intelligence (AI), deep learning (DL) has been extensively applied in Android apps. Compared with traditional Android apps (traditional apps), deep learning based Android apps (DL-based apps) need to use more third-party application programming interfaces (APIs) to complete complex DL inference tasks. However, existing methods (e.g., FlowDroid) for detecting sensitive information leakage in Android apps cannot be directly used on DL-based apps, as they have difficulty detecting third-party APIs. To solve this problem, we design DLtrace, a new static information flow analysis tool that can effectively recognize third-party APIs. With our proposed trace and detection algorithms, DLtrace can also efficiently detect privacy leaks caused by sensitive APIs in DL-based apps. Moreover, using DLtrace, we summarize the non-sequential characteristics of DL inference tasks in DL-based apps and the specific functionalities provided by DL models for such apps. We propose two formal definitions to deal with the common polymorphism and anonymous inner-class problems in the Android static analyzer. We conducted an empirical assessment with DLtrace on 208 popular DL-based apps in the wild and found that 26.0% of the apps suffered from sensitive information leakage. Furthermore, DLtrace has more robust performance than FlowDroid in detecting and identifying third-party APIs. The experimental results demonstrate that DLtrace extends FlowDroid in understanding DL-based apps and detecting security issues therein.
Keywords: mobile computing, deep learning apps, sensitive information, static analysis
Procedia PDF Downloads 179
2876 Rehabilitation Approach for Cancer Patients: Indication, Management and Outcome
Authors: Juliani Rianto, Emma Lumby, Tracey Smith
Abstract:
Cancer patients' survival is improving with new approaches and therapies in oncology. Cancer is now a chronic disease, and rehabilitation has become an ongoing program as part of cancer care. The focus of cancer rehabilitation is maximising a person's physical and emotional function, stabilising general health and reducing unnecessary hospital admissions. In Australia there are 150,000 newly diagnosed cancers every year, and the most common cancers are prostate, breast, colorectal, melanoma and lung cancer. Through referral from the oncology team, we recruited cancer patients into our cancer rehabilitation program. Patients are assessed by our multidisciplinary team, including a rehabilitation specialist, physiotherapist, occupational therapist, dietician, exercise physiologist, and psychologist. Specific issues are identified, including pain, side effects of chemotherapy and radiation therapy, and mental well-being. The goals are identified and reassessed every fortnight. Common goals, including improving nutritional status, endurance and exercise performance, balance and mobility, emotional and vocational state, education for insomnia and tiredness, and reducing hospitalisation, are identified and assessed. Patients are given a two-hour exercise program twice a week for six weeks, with a focus on aerobic and weight exercises and education sessions. Patients generally benefited from the program: quality of life improved, and support and interaction from the therapists played an important role in directing patients towards their goals.
Keywords: cancer, exercises, benefit, mental health
Procedia PDF Downloads 60
2875 Relationship between Iron-Related Parameters and Soluble Tumor Necrosis Factor-Like Weak Inducer of Apoptosis in Obese Children
Authors: Mustafa M. Donma, Orkide Donma
Abstract:
Iron is physiologically essential. However, it also participates in the catalysis of free radical formation reactions, and its deficiency is associated with amplified health risks. This trace element also establishes links with another physiological process related to cell death, apoptosis: both iron deficiency and iron overload are closely associated with apoptosis. Soluble tumor necrosis factor-like weak inducer of apoptosis (sTWEAK) has the ability to trigger apoptosis and plays a dual role in the physiological versus pathological inflammatory responses of tissues. The aim of this study was to investigate the status of these parameters, as well as the associations among them, in children with obesity, a low-grade inflammatory state. The study was performed on groups of children with normal body mass index (N-BMI) and with obesity. Forty-three children were included in each group. Based upon the age- and sex-adjusted BMI percentile tables prepared by the World Health Organization, children whose values varied between the 15th and 85th percentiles were included in the N-BMI group. Children whose BMI percentile values were between the 95th and 99th percentiles comprised the obese (OB) group. Institutional ethics committee approval and informed consent forms were obtained prior to the study. Anthropometric measurements (weight, height, waist circumference, hip circumference, head circumference, neck circumference) and blood pressure values (systolic blood pressure and diastolic blood pressure) were recorded. Routine biochemical analyses including serum iron, total iron binding capacity (TIBC), transferrin saturation percent (Tf Sat %), and ferritin were performed. sTWEAK levels were determined by enzyme-linked immunosorbent assay. Study data were evaluated using appropriate statistical tests performed with the statistical program SPSS. Serum iron levels were 91±34 mcg/dl and 75±31 mcg/dl in N-BMI and OB children, respectively. The corresponding values for TIBC, Tf Sat % and ferritin were 265 mcg/dl vs 299 mcg/dl, 37.2±19.1% vs 26.7±14.6%, and 41±25 ng/ml vs 44±26 ng/ml in the N-BMI and OB groups. sTWEAK concentrations were measured as 351 ng/L and 325 ng/L, respectively (p>0.05). Correlation analysis revealed significant associations between sTWEAK levels and the iron-related parameters (p<0.05), except ferritin. In conclusion, iron contributes to apoptosis. Children with iron deficiency have a decreased apoptosis rate in comparison with that of healthy children. sTWEAK is an inducer of apoptosis. Obese children had lower levels of both iron and sTWEAK. Low levels of sTWEAK are associated with several types of cancers and poor survival. Although an iron deficiency state was not observed in this study, the correlations detected between decreased sTWEAK and decreased iron as well as Tf Sat % values were valuable findings, which point to decreased apoptosis. This may induce a proinflammatory state, potentially leading to malignancies in the future lives of obese children.
Keywords: apoptosis, children, iron-related parameters, obesity, soluble tumor necrosis factor-like weak inducer of apoptosis
Procedia PDF Downloads 132
2874 The Impact of Surface Roughness and PTFE/TiF3/FeF3 Additives in Plain ZDDP Oil on the Friction and Wear Behavior Using Thermal and Tribological Analysis under Extreme Pressure Condition
Authors: Gabi N. Nehme, Saeed Ghalambor
Abstract:
The use of titanium fluoride and iron fluoride (TiF3/FeF3) catalysts in combination with polytetrafluoroethylene (PTFE) in plain zinc dialkyldithiophosphate (ZDDP) oil is important for the study of engine tribocomponents and is increasingly a strategy to improve the formation of the tribofilm and to provide low friction and excellent wear protection in reduced-phosphorus plain ZDDP oil. The influence of surface roughness and the concentration of TiF3/FeF3/PTFE was investigated using bearing steel samples dipped in the lubricant solution at 100°C for two different heating durations. This paper addresses the effect on the water drop contact angle of different surface finishes after treating them with different lubricant combinations. The calculated water drop contact angles were analyzed using Design of Experiments (DOE) software, and it was determined that a 0.05 μm Ra surface roughness would provide an excellent TiF3/FeF3/PTFE coating for antiwear resistance, as reflected in the scanning electron microscopy (SEM) images and the tribological testing under extreme pressure conditions. Both friction and wear performance depend greatly on the PTFE and catalysts in plain ZDDP oil with 0.05% phosphorus and on the surface finish of the bearing steel. The friction- and wear-reducing effects observed in the tribological tests indicated a better micro-lubrication effect of the 0.05 μm Ra surface roughness treated at 100°C for 24 hours when compared to the 0.1 μm Ra surface roughness with the same treatment.
Keywords: scanning electron microscopy, ZDDP, catalysts, PTFE, friction, wear
Procedia PDF Downloads 350
2873 Al-Ti-W Metallic Glass Thin Films Deposited by Magnetron Sputtering Technology to Protect Steel Against Hydrogen Embrittlement
Authors: Issam Lakdhar, Akram Alhussein, Juan Creus
Abstract:
With the huge increase in world energy consumption, researchers are working to find alternative sources of energy to fossil fuels, which cause many environmental problems such as the production of greenhouse gases. Hydrogen is considered a green energy source whose combustion does not cause environmental pollution. The transport and storage of the gas molecules, or of other products containing this smallest chemical element, in metallic structures (pipelines, tanks) are crucial issues. The dissolution and permeation of hydrogen into the metal lattice lead to the formation of hydride phases and the embrittlement of structures. To protect metallic structures, a surface treatment could be a good solution. Among the different techniques, magnetron sputtering is used to elaborate micrometric coatings capable of slowing down or stopping hydrogen permeation. In the plasma environment, the deposition parameters of new Al-Ti-W thin-film metallic glasses were optimized and controlled in order to obtain a hydrogen barrier. Many characterizations were carried out (SEM, XRD and nano-indentation, among others) to control the composition and understand the influence of film microstructure and chemical composition on hydrogen permeation through the coatings. The coating performance was evaluated under two hydrogen production methods: chemical and electrochemical (cathodic protection) techniques. The quantity of hydrogen absorbed was experimentally determined using the Thermal Desorption Spectroscopy (TDS) method. An ideal Al-Ti-W thin film was developed and showed excellent behavior against the diffusion of hydrogen.
Keywords: thin films, hydrogen, PVD, plasma technology, electrochemical properties
Procedia PDF Downloads 185
2872 The Impact of a Sustainable Solar Heating System on the Growth of Strawberry Plants in an Agricultural Greenhouse
Authors: Ilham Ihoume, Rachid Tadili, Nora Arbaoui
Abstract:
The use of solar energy is a crucial tactic in the agricultural industry's plan to decrease greenhouse gas emissions. This clean source of energy can greatly lower the sector's carbon footprint and make a significant impact in the fight against climate change. In this regard, this study examines the effects of a solar-based heating system in a north-south oriented agricultural greenhouse on the development of strawberry plants during winter. This system relies on the circulation of water as a heat transfer fluid in a closed circuit installed on the greenhouse roof to store heat during the day and release it inside at night. A comparative experimental study was conducted in two greenhouses: one experimental greenhouse with the solar heating system and the other a control without any heating system. Both greenhouses are located on the terrace of the Solar Energy and Environment Laboratory of Mohammed V University in Rabat, Morocco. The developed heating system consists of a copper coil inserted in double glazing and placed on the roof of the greenhouse, a water circulation pump, a battery, and a photovoltaic solar panel to power the electrical components. This inexpensive and environmentally friendly system allows the greenhouse to be heated during the winter and improves its microclimate. This improvement resulted in an increase in the air temperature inside the experimental greenhouse of 6 °C and 8 °C, and a reduction in its relative humidity of 23% and 35%, compared to the control greenhouse and the ambient air, respectively, throughout the winter. Regarding agronomic performance, production was observed 17 days earlier than in the control greenhouse.
Keywords: sustainability, thermal energy storage, solar energy, agriculture greenhouse
Procedia PDF Downloads 87
2871 Identification of Factors and Impacts on the Success of Implementing Extended Enterprise Resource Planning: Case Study of Manufacturing Industries in East Java, Indonesia
Authors: Zeplin Jiwa Husada Tarigan, Sautma Ronni Basana, Widjojo Suprapto
Abstract:
ERP integrates all data from the various departments within a company into one database. One department inputs the data, and many other departments can access and use the data through the connected information system. As many manufacturing companies in Indonesia implement ERP technology, many adjustments have to be made to align with the business processes in the companies, especially the management policy and the competitive advantages. For companies that are successful in the initial implementation, the process still has to be maintained so that the initial success can develop along with the changing business processes of the company; companies that have already implemented ERP successfully still need to maintain the system so that it can match business development and changes. The continued success of the extended ERP implementation aims to achieve efficient and effective performance for the company. This research distributed 100 questionnaires to manufacturing companies in East Java, Indonesia, which have implemented ERP and had it live for over five years. Ninety questionnaires were returned, of which ten were disqualified because they came from companies that had implemented ERP for less than five years, leaving 80 questionnaires used as data, a response rate of 80%. Based on the data and the analysis with PLS (Partial Least Squares), it is found that organizational commitment impacts user effectiveness and provides adequate IT infrastructure. User effectiveness impacts the adequacy of IT infrastructure. The information quality of the company increases the implementation of the extended ERP in manufacturing companies in East Java, Indonesia.
Keywords: organization commitment, adequate IT infrastructure, information quality, extended ERP implementation
Procedia PDF Downloads 168
2870 Performance Evaluation and Comparison between the Empirical Mode Decomposition, Wavelet Analysis, and Singular Spectrum Analysis Applied to the Time Series Analysis in Atmospheric Science
Authors: Olivier Delage, Hassan Bencherif, Alain Bourdier
Abstract:
Signal decomposition approaches represent an important step in time series analysis, providing useful knowledge and insight into the data and the characteristics of the underlying dynamics, while also facilitating tasks such as noise removal and feature extraction. As most observational time series are nonlinear and nonstationary, resulting from the interaction of several physical processes at different time scales, experimental time series have fluctuations at all time scales and require the development of specific signal decomposition techniques. The most commonly used techniques are data driven, enabling well-behaved signal components to be obtained without making any prior assumptions about the input data. Among the most popular time series decomposition techniques, and the most cited in the literature, are the empirical mode decomposition and its variants, the empirical wavelet transform, and singular spectrum analysis. With the increasing popularity and utility of these methods in wide-ranging applications, it is imperative to gain a good understanding of and insight into the operation of these algorithms. In this work, we describe all of the techniques mentioned above, as well as their ability to denoise signals, capture trends, identify components corresponding to the physical processes involved in the evolution of the observed system, and deduce the dimensionality of the underlying dynamics. Results obtained with all of these methods on experimental total ozone column and rainfall time series are discussed and compared.
Keywords: denoising, empirical mode decomposition, singular spectrum analysis, time series, underlying dynamics, wavelet analysis
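For orientation, the sketch below runs the three families of decompositions on a synthetic series. The package choices are assumptions about tooling (PyEMD, i.e. the EMD-signal package, and PyWavelets), and the SSA is a minimal trajectory-matrix/SVD version with diagonal averaging, not an optimized implementation; it is illustrative only, not the authors' processing chain.

```python
# Illustrative comparison of EMD, wavelet decomposition, and a minimal SSA on a synthetic series.
import numpy as np
import pywt
from PyEMD import EMD

t = np.linspace(0, 10, 1000)
signal = np.sin(2 * np.pi * 0.5 * t) + 0.5 * np.sin(2 * np.pi * 3 * t) + 0.1 * np.random.randn(t.size)

# 1) Empirical mode decomposition: data-driven intrinsic mode functions
imfs = EMD().emd(signal)
print("EMD: number of IMFs =", len(imfs))

# 2) Discrete wavelet decomposition (one approximation + detail coefficient arrays)
coeffs = pywt.wavedec(signal, "db4", level=4)
print("Wavelet: coefficient array lengths =", [len(c) for c in coeffs])

# 3) Minimal singular spectrum analysis: SVD of the trajectory (Hankel) matrix
L = 200                                              # embedding window length (assumed)
K = signal.size - L + 1
trajectory = np.column_stack([signal[i:i + L] for i in range(K)])
U, s, Vt = np.linalg.svd(trajectory, full_matrices=False)
print("SSA: leading singular values =", np.round(s[:5], 2))

def reconstruct(component):
    """Diagonal (anti-diagonal) averaging of one rank-1 component back to a series."""
    X = s[component] * np.outer(U[:, component], Vt[component])
    series = np.zeros(signal.size)
    counts = np.zeros(signal.size)
    for i in range(L):
        for j in range(K):
            series[i + j] += X[i, j]
            counts[i + j] += 1
    return series / counts

trend = reconstruct(0) + reconstruct(1)              # leading pair ~ dominant oscillation
print("residual std after removing SSA leading pair:", round(np.std(signal - trend), 3))
```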
Procedia PDF Downloads 1182869 (Re)connecting to the Spirit of the Language: Decolonizing from Eurocentric Indigenous Language Revitalization Methodologies
Authors: Lana Whiskeyjack, Kyle Napier
Abstract:
The spirit of the language embodies the motivation for Indigenous people to connect with the Indigenous language of their lineage. While the concept is often woven into discussions by Indigenous language revitalizationists, particularly those who are themselves Indigenous, few tangible terms in academic research conceptually actualize it. Through collaborative work with Indigenous language speakers, elders, and learners, this research sets out to identify the spirit of the language, the catalysts of disconnection from it, and the sources of reconnection to it. The work fundamentally addresses the terms of engagement around collaboration with Indigenous communities, inviting a decolonial approach to community outreach and individual relationships. For the authors, as Indigenous researchers, this means beginning, maintaining, and closing the work in ceremony while remaining transparent with community members about the work and related publishing throughout the project's duration. Decolonizing this approach also requires maintaining explicit, ongoing consent from the elders, knowledge keepers, and community members when handling their ancestral and Indigenous knowledge; the handling of this knowledge, both as digital materials and as ancestral Indigenous knowledge, is regarded as stewardship. The work draws on recorded conversations in both nêhiyawêwin and English, resulting from 10 semi-structured interviews with fluent nêhiyawêwin speakers and three structured dialogue circles with fluent and emerging speakers, transcribed by a speaker fluent in both nêhiyawêwin and English. The results were categorized thematically to conceptually actualize the spirit of the language, the catalysts of disconnection from the spirit of the language, and community-voiced methods of reconnection to the spirit of the language. The interviews overwhelmingly indicate that the spirit of the language is drawn from the land. Although nêhiyawêwin is the focus of this work, Indigenous languages are by nature related to the land, a point reaffirmed by the Indigenous language learners and speakers who described ancestries and lineages from multiple Indigenous communities. Several other key elements embody the spirit of the language, including ceremony and spirituality, as well as the semantic worldviews tied to the polysynthetic, verb-oriented morphophonemics typical of Indigenous languages, nêhiyawêwin among them. The catalysts of disconnection from the spirit of the language are those forces whose histories have severed the connection between Indigenous Peoples and the spirit of their languages, or that have damaged relationships with the land, ceremony, and ways of thinking; this research and its literature review identify the three most ubiquitously damaging, interdependent catalysts as colonization, capitalism, and Christianity. As voiced by the Indigenous language learners, the work also requires addressing means of reconnecting to the spirit of the language. Interviewees described reconnection as involving a whole relationship with the land, the practice of reciprocal-relational methodologies for language learning, and Indigenous-protected and -governed learning.
This work concludes in support of those reconnection methodologies.Keywords: indigenous language acquisition, indigenous language reclamation, indigenous language revitalization, nêhiyawêwin, spirit of the language
Procedia PDF Downloads 1432868 Quantification of Hydrogen Sulfide and Methyl Mercaptan in Air Samples from a Waste Management Facility
Authors: R. F. Vieira, S. A. Figueiredo, O. M. Freitas, V. F. Domingues, C. Delerue-Matos
Abstract:
The presence of sulphur compounds such as hydrogen sulphide and mercaptans is one of the reasons why waste-water treatment and waste management are associated with odour emissions. In this context, a method for quantifying these compounds helps optimize treatments aimed at their elimination, namely biofiltration processes. The aim of this study was to develop a method for quantifying odorous gases in air samples from waste treatment plants. A method based on headspace solid-phase microextraction (HS-SPME) coupled with gas chromatography and flame photometric detection (GC-FPD) was used to analyse H2S and methyl mercaptan (MM). The extraction was carried out with a 75-μm Carboxen/polydimethylsiloxane fibre coating at 22 ºC for 20 min, and the extracts were analysed on a Shimadzu GC 2010 Plus A with a sulphur-filter detector: splitless mode (0.3 min), with a column temperature program starting at 60 ºC and increasing at 15 ºC/min to 100 ºC (held 2 min). The injector temperature was held at 250 ºC and the detector at 260 ºC. Calibration standards were prepared with a gas diluter (digital Hovagas G2 Multi Component Gas Mixer). This unit has two inlet connections, one for a stream of the gas to be diluted and another for a stream of nitrogen, and an outlet connected to a glass bulb; 40 ppm H2S and 50 ppm MM cylinders were used. The equipment was programmed to the selected concentration and automatically carried out the dilution into the glass bulb; the mixture was left flowing through the bulb for 5 min and the extremities were then closed. The method allowed calibration between 1-20 ppm for H2S, and between 0.02-0.1 ppm and 1-3.5 ppm for MM. Quantification of air samples from the inlet and outlet of a biofilter operating in a waste management facility in the north of Portugal allowed evaluation of the biofilter's performance.Keywords: biofiltration, hydrogen sulphide, mercaptans, quantification
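The abstract gives the calibration ranges but not the calibration data. Purely as an illustration of the quantification step, the sketch below fits a linear calibration for H2S within the 1-20 ppm range reported above and back-calculates a sample concentration; the response values are hypothetical, and in practice the FPD response to sulphur compounds is often nonlinear, so a log-log or quadratic fit may be more appropriate.

import numpy as np

# Hypothetical standards within the 1-20 ppm H2S calibration range reported above
conc_ppm = np.array([1.0, 5.0, 10.0, 15.0, 20.0])
peak_area = np.array([2.1e3, 9.8e3, 2.05e4, 3.1e4, 4.0e4])  # hypothetical GC-FPD responses

slope, intercept = np.polyfit(conc_ppm, peak_area, 1)
r_squared = np.corrcoef(conc_ppm, peak_area)[0, 1] ** 2
print(f"calibration: area = {slope:.1f} * ppm + {intercept:.1f}  (R^2 = {r_squared:.4f})")

# Back-calculate an unknown air sample from its measured peak area
sample_area = 1.25e4
sample_ppm = (sample_area - intercept) / slope
print(f"sample concentration ~ {sample_ppm:.2f} ppm H2S")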
Procedia PDF Downloads 4762867 Analyzing the Effect of Materials’ Selection on Energy Saving and Carbon Footprint: A Case Study Simulation of Concrete Structure Building
Authors: M. Kouhirostamkolaei, M. Kouhirostami, M. Sam, J. Woo, A. T. Asutosh, J. Li, C. Kibert
Abstract:
Construction is one of the most energy-consuming activities in the urban environment and produces a significant amount of greenhouse gas emissions around the world, so the impact of the construction industry on global warming is undeniable. Reducing building energy consumption and mitigating carbon production can therefore slow the rate of global warming. The purpose of this study is to determine energy consumption and carbon dioxide production during the operation phase of a building and the impact of new building shells on energy saving and carbon footprint. A residential building with a reinforced concrete structure in Babolsar, Iran, is selected, and DesignBuilder software is used to calculate carbon dioxide production and energy consumption over one year of building operation. The initial results show that the building uses 61,750 kWh of energy per year. The simulation then analyzes the effect of changing the building shell (using XPS polystyrene insulation and new electrochromic windows) and the type of lighting on energy consumption and the resulting carbon dioxide production. The results show that applying the proposed changes reduces operational energy use and carbon production by approximately 70%, bringing emissions down to 11,345 kg CO2e/yr. The results of this study help designers and engineers treat material selection as one of the most important stages of design for improving the energy performance of buildings.Keywords: construction materials, green construction, energy simulation, carbon footprint, energy saving, concrete structure, designbuilder
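A back-of-the-envelope check of the figures quoted above can be done as in the short sketch below; the grid emission factor used is an assumption for illustration only, as the abstract does not state one.

# Rough check of the reported figures (the emission factor is a hypothetical,
# assumed value; it is not given in the abstract).
baseline_energy_kwh = 61_750   # reported annual operational energy, base case
reduction_fraction = 0.70      # ~70% reduction reported after shell and lighting changes

retrofit_energy_kwh = baseline_energy_kwh * (1 - reduction_fraction)
print(f"retrofit energy: {retrofit_energy_kwh:,.0f} kWh/yr")  # about 18,500 kWh/yr

assumed_emission_factor = 0.61  # kg CO2e per kWh, hypothetical grid factor
implied_emissions = retrofit_energy_kwh * assumed_emission_factor
print(f"implied emissions: {implied_emissions:,.0f} kg CO2e/yr")  # of the same order as the reported 11,345 kg CO2e/yr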
Procedia PDF Downloads 1982866 CFD Simulation Approach for Developing New Powder Dispensing Device
Authors: Revanth Rallapalli
Abstract:
Manually dispensing powders can be difficult, as it requires gradual pouring while checking the dispensed amount on a scale. Current systems are manual and non-continuous, user-dependent, and make powder dispensation difficult to control; dosing powdered medicines repeatedly in precise amounts, quickly and accurately, has long been a challenge. Various new powder dispensing mechanisms are being designed to overcome these problems, including a battery-operated screw conveyor mechanism. Such inventions are evaluated numerically at the concept-development stage using Computational Fluid Dynamics (CFD) of gas-solid multiphase flow systems; CFD has been very helpful in developing such devices, saving time and money by reducing the number of prototypes and tests. This paper describes a simulation of powder dispensation from the end of a trocar in which the powder is treated as a secondary phase in air and simulated with the Dense Discrete Phase Model incorporating the Kinetic Theory of Granular Flow (DDPM-KTGF). With a powder volume fraction of 50%, the powder is transported from the inlet side to the trocar end by the rotation of the screw conveyor, and the performance is calculated over a 1 s time frame using an unsteady (transient) computation. This methodology will help designers develop design concepts that improve dispensation and the effective dispensing area within a quick turnaround time frame.Keywords: multiphase flow, screw conveyor, transient, dense discrete phase model (DDPM), kinetic theory of granular flow (KTGF)
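Alongside the CFD model, the expected throughput of a screw-conveyor dispenser can be bounded with a first-order volumetric estimate, as in the sketch below. The geometry, rotation speed, and bulk density used here are hypothetical, since the abstract does not give the device dimensions; only the 50% powder volume fraction is taken from the text above.

import math

# First-order screw-conveyor throughput estimate (all dimensions hypothetical;
# only the 50% powder volume fraction comes from the abstract).
screw_diameter_m = 0.008    # flight outer diameter
shaft_diameter_m = 0.003    # core shaft diameter
pitch_m = 0.006             # axial advance per revolution
speed_rps = 2.0             # rotation speed in revolutions per second
fill_fraction = 0.5         # powder volume fraction used in the simulation
bulk_density_kg_m3 = 600.0  # hypothetical powder bulk density

annulus_area = math.pi / 4 * (screw_diameter_m ** 2 - shaft_diameter_m ** 2)
volumetric_rate = annulus_area * pitch_m * speed_rps * fill_fraction  # m^3/s
mass_rate_mg_s = volumetric_rate * bulk_density_kg_m3 * 1e6           # mg/s

print(f"volumetric rate ~ {volumetric_rate * 1e9:.0f} mm^3/s")
print(f"mass rate ~ {mass_rate_mg_s:.0f} mg/s")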
Procedia PDF Downloads 146