Search results for: energy simulation
1267 God, The Master Programmer: The Relationship Between God and Computers
Authors: Mohammad Sabbagh
Abstract:
Anyone who reads the Torah or the Quran learns that GOD created everything that is around us, seen and unseen, in six days. Within HIS plan of creation, HE placed for us a key proof of HIS existence, which is essentially computers and the ability to program them. Digital computer programming began with binary instructions, which eventually evolved into what are known as high-level programming languages. Any programmer in our modern time can attest that you are essentially giving the computer commands in words, and when the program is compiled, whatever is processed as output is limited to what the computer was given as an ability and, furthermore, as an instruction. So one can deduce that GOD created everything around us with HIS words, programming everything around us in six days, just like how we can program a virtual world on the computer. GOD did mention in the Quran that one day, where GOD’s throne is, is 1000 years of what we count; therefore, one might understand that GOD spoke non-stop for 6000 years of what we count, and gave everything its functions, attributes, classes, methods and interactions, similar to what we do in object-oriented programming. Of course, GOD has the higher example, and what HE created is much more than OOP. So when GOD said that everything is already predetermined, it is because for any input, whether physical, spiritual or by thought, given by any of HIS creatures, the answer has already been programmed. Any path, any thought, any idea has already been laid out with a reaction to any decision an inputter makes. Exalted is GOD! GOD refers to HIMSELF as The Fastest Accountant in The Quran; the Arabic word that was used is close to processor or calculator. If you create a 3D simulation of a supernova explosion to understand how GOD produces certain elements and fuses protons together to spread more of HIS blessings around HIS skies, then in 2022 you are going to require one of the strongest, fastest, most capable supercomputers in the world, with a theoretical speed of 50 petaFLOPS, to accomplish that. In other words, the ability to perform one quadrillion (10¹⁵) floating-point operations per second. A number a human cannot even fathom. To put it further into perspective, GOD is calculating while the computer is going through those 50 petaFLOPS of calculations per second, and HE is also calculating all the physics of every atom, and of what is smaller than that, in the actual explosion, and it is all in truth. When GOD said HE created the world in truth, one of the meanings a person can understand is that when certain things occur around you, whether it is how a car crashes or how a tree grows, there is a science and a way to understand it, and whatever programming or science you deduce from whatever event you observed can relate to other similar events. That is why GOD might have said in The Quran that it is the people of knowledge, scholars, or scientists who fear GOD the most! One thing that is essential for us to keep up with what the computer is doing, and to track our progress along with any errors, is that we incorporate logging mechanisms and backups. GOD in The Quran said that ‘WE used to copy what you used to do’. Essentially, as the world is running, think of it as an interactive movie that is being played out in front of you, in a fully immersive, non-virtual-reality setting. GOD is recording it, from every angle to every thought, to every action.
This brings up the idea of how scary the Day of Judgment will be, when one might realize that it is going to be a fully immersive video as we receive and read our book.
Keywords: programming, the Quran, object orientation, computers and humans, GOD
1266 Film Dosimetry – An Asset for Collaboration Between Cancer Radiotherapy Centers at Established Institutions and Those Located in Low- and Middle-Income Countries
Authors: A. Fomujong, P. Mobit, A. Ndlovu, R. Teboh
Abstract:
Purpose: Film’s unique qualities, such as tissue equivalence, high spatial resolution, near energy independence and comparatively low cost as a dosimeter, ought to make it the preferred and widely used dosimeter in radiotherapy centers in low- and middle-income countries (LMICs). This, however, is not always the case, as other factors that are often taken for granted in advanced radiotherapy centers remain a challenge in LMICs. We explored the unique qualities of film dosimetry that can make it possible for one Institution to benefit from another’s protocols via collaboration. Methods: For simplicity, two Institutions were considered in this work. We used a single batch of films (EBT-XD) and established a calibration protocol, including scan protocols and calibration curves, using the radiotherapy delivery system at Institution A. We then performed patient-specific QA for patients treated on system A (PSQA-A-A). Films from the same batch were then sent to a remote center for PSQA on radiotherapy delivery system B. Irradiations were done at Institution B, and the films were then returned to Institution A for processing and analysis (PSQA-B-A). The following points were taken into consideration throughout the process: (a) a reference film was irradiated to a known dose on the same system irradiating the PSQA film; (b) for calibration, we utilized the one-scan protocol and maintained the same scan orientation of the calibration, PSQA and reference films. Results: Gamma index analysis using a dose threshold of 10% and 3%/2 mm criteria showed a gamma passing rate of 99.8% and 100% for the PSQA-A-A and PSQA-B-A, respectively. Conclusion: This work demonstrates that one could use established film dosimetry protocols in one Institution, e.g., an advanced radiotherapy center, and apply similar accuracies to irradiations performed at another institution, e.g., a center located in an LMIC, which thus encourages collaboration between the two for worldwide patient benefit.
Keywords: collaboration, film dosimetry, LMIC, radiotherapy, calibration
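The global gamma-index comparison reported above (3%/2 mm criteria with a 10% dose threshold) can be illustrated with a minimal sketch; the 1-D dose profiles, grid spacing and brute-force search below are illustrative assumptions, not the authors' film-analysis software.

```python
import numpy as np

def gamma_passing_rate(dose_ref, dose_eval, spacing_mm,
                       dd=0.03, dta_mm=2.0, threshold=0.10):
    """Brute-force global gamma index on a 1-D dose profile (illustrative only)."""
    x = np.arange(len(dose_ref)) * spacing_mm           # spatial positions, mm
    norm = dose_ref.max()                               # global normalisation dose
    gamma = np.full(len(dose_ref), np.inf)
    for i, (xi, d_ref) in enumerate(zip(x, dose_ref)):
        if d_ref < threshold * norm:                    # skip the low-dose region
            gamma[i] = np.nan
            continue
        # gamma = min over evaluated points of sqrt((dr/DTA)^2 + (dD/DD)^2)
        dist2 = ((x - xi) / dta_mm) ** 2
        dose2 = ((dose_eval - d_ref) / (dd * norm)) ** 2
        gamma[i] = np.sqrt(np.min(dist2 + dose2))
    valid = gamma[~np.isnan(gamma)]
    return 100.0 * np.mean(valid <= 1.0)                # % of points with gamma <= 1

# Example: two nearly identical synthetic profiles should pass at ~100%
ref = np.exp(-np.linspace(-3, 3, 121) ** 2) * 2.0       # Gy
ev = ref * 1.01                                         # 1% scaled copy
print(f"Gamma passing rate: {gamma_passing_rate(ref, ev, 0.5):.1f}%")
```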
1265 Effect of Cryogenic Pre-stretching on the Room Temperature Tensile Behavior of AZ61 Magnesium Alloy and Dominant Grain Growth Mechanisms During Subsequent Annealing
Authors: Umer Masood Chaudry, Hafiz Muhammad Rehan Tariq, Chung-soo Kim, Tea-sung Jun
Abstract:
This study explored the influence of pre-stretching temperature on the microstructural characteristics and deformation behavior of AZ61 magnesium alloy and its implications for grain growth during subsequent annealing. AZ61 alloy was stretched to 5% plastic strain along the rolling (RD) and transverse (TD) directions at room temperature (RT) and cryogenic temperature (-150 °C, CT), followed by annealing at 320 °C for 1 h, to investigate the twinning and dislocation evolution and their consequent effect on the flow stress, plastic strain and strain hardening rate. Compared to the RT-stretched samples, a significant improvement in yield stress and strain hardening rate and a moderate reduction in elongation to failure were witnessed for the CT-stretched samples along RD and TD. The subsequent EBSD analysis revealed an increased fraction of fine {10-12} twins and the nucleation of multiple {10-12} twin variants, caused by higher local stress concentration at the grain boundaries in the CT-stretched samples, as manifested by the kernel average misorientation. This higher twin fraction and twin-twin interaction imposed strengthening by restricting the mean free path of dislocations, leading to higher flow stress and strain hardening rate. During annealing of the RT/CT-stretched samples, the residual strain energy and twin boundaries were decreased due to static recovery, leading to a coarse-grained, twin-free microstructure. Strain-induced boundary migration (SIBM) was found to be the predominant mechanism governing grain growth during annealing, via the movement of high-angle grain boundaries.
Keywords: magnesium, twinning, twinning variant selection, EBSD, cryogenic deformation
1264 Fragility Analysis of a Soft First-Story Building in Mexico City
Authors: Rene Jimenez, Sonia E. Ruiz, Miguel A. Orellana
Abstract:
On 09/19/2017, a Mw = 7.1 intraslab earthquake occurred in Mexico, causing the collapse of about 40 buildings. Many of these were 5- or 6-story buildings with a soft first story; so, it is desirable to perform a structural fragility analysis of typical structures representative of those buildings and to propose a reliable structural solution. Here, a typical 5-story building constituted by regular R/C moment-resisting frames in the first story and confined masonry walls in the upper levels, similar to the structures that collapsed in the 09/19/2017 Mexico earthquake, is analyzed. Three different structural solutions of the 5-story building are considered: S1) it is designed in accordance with the Mexico City Building Code-2004; S2) the column dimensions of the first story corresponding to S1 are reduced; and S3) viscous dampers are added at the first story of solution S2. A number of incremental dynamic analyses are performed for each structural solution, using a 3D structural model. The hysteretic behavior model of the masonry was calibrated with experiments performed at the Laboratory of Structures at UNAM. Ten seismic ground motions are used to excite the structures; they correspond to ground motions recorded on intermediate soil of Mexico City, where the structures are located, with a dominant period of around 1 s. The fragility curves of the buildings are obtained for different values of the maximum inter-story drift demand. Results show that solutions S1 and S3 give rise to similar probabilities of exceedance of a given value of inter-story drift for the same seismic intensity, and that solution S2 presents a higher probability of exceedance for the same seismic intensity and inter-story drift demand. Therefore, it is concluded that solution S3 (which corresponds to the building with a soft first story and energy dissipation devices) can be a reliable solution from the structural point of view.
Keywords: demand hazard analysis, fragility curves, incremental dynamic analyses, soft first story, structural capacity
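The fragility curves described above are commonly fitted with a lognormal functional form; the following is only a generic sketch of that convention, since the abstract does not state the functional form or parameters actually used for solutions S1-S3.

```latex
% Probability that the maximum inter-story drift demand D exceeds a limit value d,
% given a seismic intensity measure IM = x (estimated from the incremental dynamic analyses):
P\left[\, D \ge d \mid IM = x \,\right] \;=\; \Phi\!\left( \frac{\ln\!\left(x/\theta_{d}\right)}{\beta_{d}} \right)
% \Phi      : standard normal cumulative distribution function
% \theta_d  : median intensity at which the drift limit d is reached
% \beta_d   : lognormal dispersion (record-to-record plus modeling variability)
```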
1263 Bi- and Tri-Metallic Catalysts for Hydrogen Production from Hydrogen Iodide Decomposition
Authors: Sony, Ashok N. Bhaskarwar
Abstract:
Production of hydrogen from a renewable raw material without any co-synthesis of harmful greenhouse gases is the current need for sustainable energy solutions. The sulfur-iodine (SI) thermochemical cycle, using intermediate chemicals, is an efficient process for producing hydrogen at a much lower temperature than that required for the direct splitting of water. No net byproduct forms in the cycle. Hydrogen iodide (HI) decomposition is a crucial reaction in this cycle, as the product, hydrogen, forms only in this step. It is an endothermic, reversible, and equilibrium-limited reaction. The theoretical equilibrium conversion at 550°C is a meagre 24%. There is a growing interest, therefore, in enhancing the HI conversion to near-equilibrium values at lower reaction temperatures and by possibly improving the rate. The reaction is relatively slow without a catalyst, and hence catalytic decomposition of HI has gained much significance. Bi-metallic Ni-Co, Ni-Mn, and Co-Mn and tri-metallic Ni-Co-Mn catalysts over zirconia support were tested for the HI decomposition reaction. The catalysts were synthesized via a sol-gel process wherein Ni was 3 wt% in all the samples, and Co and Mn had equal weight ratios in the Co-Mn catalyst. Powder X-ray diffraction and Brunauer-Emmett-Teller surface area characterizations indicated the polycrystalline nature and well-developed mesoporous structure of all the samples. The experiments were performed in a vertical laboratory-scale packed bed reactor made of quartz, and HI (55 wt%) was fed along with nitrogen at a WHSV of 12.9 hr⁻¹. Blank experiments at 500°C for HI decomposition suggested a conversion of less than 5%. The activities of all the different catalysts were checked at 550°C, and the highest conversion of 23.9% was obtained with the tri-metallic 3Ni-Co-Mn-ZrO₂ catalyst. The decreasing order of the performance of the catalysts could be expressed as: 3Ni-Co-Mn-ZrO₂ > 3Ni-2Co-ZrO₂ > 3Ni-2Mn-ZrO₂ > 2.5Co-2.5Mn-ZrO₂. The tri-metallic catalyst remained active for 360 min at 550°C without any observable drop in its activity/stability. Among the explored catalyst compositions, the tri-metallic catalyst certainly has a better performance for HI conversion when compared to the bi-metallic ones. Owing to their low costs and ease of preparation, these tri-metallic catalysts could be used for large-scale hydrogen production.
Keywords: sulfur-iodine cycle, hydrogen production, hydrogen iodide decomposition, bi- and tri-metallic catalysts
1262 In Silico Exploration of Quinazoline Derivatives as EGFR Inhibitors for Lung Cancer: A Multi-Modal Approach Integrating QSAR-3D, ADMET, Molecular Docking, and Molecular Dynamics Analyses
Authors: Mohamed Moussaoui
Abstract:
A series of thirty-one potential inhibitors targeting the epidermal growth factor receptor kinase (EGFR), derived from quinazoline, underwent 3D-QSAR analysis using CoMFA and CoMSIA methodologies. The training and test sets of quinazoline derivatives were utilized to construct and validate the QSAR models, respectively, with dataset alignment performed using the lowest-energy conformer of the most active compound. The best-performing CoMFA and CoMSIA models demonstrated impressive determination coefficients, with R² values of 0.981 and 0.978, respectively, and leave-one-out cross-validation determination coefficients, Q², of 0.645 and 0.729, respectively. Furthermore, external validation using a test set of five compounds yielded predicted determination coefficients, R² test, of 0.929 and 0.909 for CoMFA and CoMSIA, respectively. Building upon these promising results, eighteen new compounds were designed and assessed for drug-likeness and ADMET properties through in silico methods. Additionally, molecular docking studies were conducted to elucidate the binding interactions between the selected compounds and the enzyme. Detailed molecular dynamics simulations were performed to analyze the stability, conformational changes, and binding interactions of the quinazoline derivatives with the EGFR kinase. These simulations provided deeper insights into the dynamic behavior of the compounds within the active site. This comprehensive analysis enhances the understanding of quinazoline derivatives as potential anti-cancer agents and provides valuable insights for lead optimization in the early stages of drug discovery, particularly for developing highly potent anticancer therapeutics.
Keywords: 3D-QSAR, CoMFA, CoMSIA, ADMET, molecular docking, quinazoline, molecular dynamics, EGFR inhibitors, lung cancer, anticancer
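A minimal sketch of how the leave-one-out cross-validated Q² quoted above is computed from observed and predicted activities; the ordinary least-squares surrogate model and the random descriptor data below are illustrative assumptions standing in for the CoMFA/CoMSIA field descriptors and PLS regression used in the study.

```python
import numpy as np

def q2_loo(X, y, fit_predict):
    """Leave-one-out cross-validated Q^2 = 1 - PRESS / SS_total."""
    n = len(y)
    press = 0.0
    for i in range(n):
        train = np.delete(np.arange(n), i)
        pred = fit_predict(X[train], y[train], X[i:i + 1])   # predict the left-out compound
        press += (y[i] - pred[0]) ** 2
    ss_tot = float(np.sum((y - y.mean()) ** 2))
    return 1.0 - press / ss_tot

# Ordinary least squares stands in for the PLS step of CoMFA/CoMSIA (assumption).
def ols_fit_predict(X_tr, y_tr, X_te):
    A = np.c_[X_tr, np.ones(len(X_tr))]                      # add intercept column
    coef, *_ = np.linalg.lstsq(A, y_tr, rcond=None)
    return np.c_[X_te, np.ones(len(X_te))] @ coef

rng = np.random.default_rng(0)
X = rng.normal(size=(26, 4))                                 # 26 training compounds, 4 descriptors
y = X @ np.array([1.2, -0.8, 0.5, 0.3]) + rng.normal(scale=0.2, size=26)  # pIC50-like activity
print(f"Q^2 (LOO) = {q2_loo(X, y, ols_fit_predict):.3f}")
```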
1261 Detailed Quantum Circuit Design and Evaluation of Grover's Algorithm for the Bounded Degree Traveling Salesman Problem Using the Q# Language
Authors: Wenjun Hou, Marek Perkowski
Abstract:
The Traveling Salesman Problem is famous in computing and graph theory. In short, it asks for the Hamiltonian cycle of the least total weight in a given graph with N nodes. All variations on this problem, such as those with K-bounded-degree nodes, are classified as NP-complete in classical computing. Although several papers propose theoretical high-level designs of quantum algorithms for the Traveling Salesman Problem, no quantum circuit implementation of these algorithms has been created, to the best of our knowledge. In contrast to previous papers, the goal of this paper is not to optimize some abstract complexity measures based on the number of oracle iterations, but to be able to evaluate the real circuit and time costs of the quantum computer. Using the emerging quantum programming language Q# developed by Microsoft, which runs quantum circuits in a quantum computer simulation, an implementation of the bounded-degree problem and its respective quantum circuit were created. To apply Grover’s algorithm to this problem, a quantum oracle was designed, evaluating the cost of a particular set of edges in the graph as well as its validity as a Hamiltonian cycle. Repeating the Grover algorithm with an oracle that finds a successively lower cost each time allows the decision problem to be transformed into an optimization problem, finding the minimum cost of Hamiltonian cycles. N log₂ K qubits are put into an equiprobable superposition by applying the Hadamard gate on each qubit. Within these N log₂ K qubits, the method uses an encoding in which every node is mapped to a set of its encoded edges. The oracle consists of several blocks of circuits: a custom-written edge weight adder, node index calculator, uniqueness checker, and comparator, which were all created using only quantum Toffoli gates, including their special forms, the Feynman and Pauli X gates. The oracle begins by using the edge encodings specified by the qubits to calculate each node that the path visits, adding up the edge weights along the way. Next, the oracle uses the calculated nodes from the previous step and checks that all the nodes are unique. Finally, the oracle checks that the calculated cost is less than the previously calculated cost. By performing the oracle an optimal number of times, a correct answer can be generated with very high probability. The oracle of the Grover Algorithm is modified using the recalculated minimum cost value, and this procedure is repeated until the cost cannot be further reduced. This algorithm and circuit design have been verified, using several datasets, to generate correct outputs.
Keywords: quantum computing, quantum circuit optimization, quantum algorithms, hybrid quantum algorithms, quantum programming, Grover’s algorithm, traveling salesman problem, bounded-degree TSP, minimal cost, Q# language
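The qubit count of the edge encoding and the usual estimate of the optimal number of Grover iterations can be sketched as below; the graph size, degree bound and number of marked solutions are illustrative assumptions, not the paper's datasets.

```python
import math

def grover_resources(n_nodes, k_degree, n_solutions=1):
    """Qubit count and optimal Grover iteration estimate for the edge encoding
    described above (N * log2(K) search qubits, one edge choice per node)."""
    qubits = n_nodes * math.ceil(math.log2(k_degree))    # N * log2(K) search qubits
    search_space = 2 ** qubits                            # states in the Hadamard superposition
    # Optimal iterations ~ (pi/4) * sqrt(search_space / number of marked states)
    iterations = math.floor((math.pi / 4) * math.sqrt(search_space / n_solutions))
    return qubits, iterations

# Illustrative example: 8 nodes, degree bound K = 4, assume 2 marked Hamiltonian cycles
q, r = grover_resources(8, 4, n_solutions=2)
print(f"{q} search qubits, ~{r} Grover iterations per cost threshold")
```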
1260 Numerical Investigation on Transient Heat Conduction through Brine-Spongy Ice
Authors: S. R. Dehghani, Y. S. Muzychka, G. F. Naterer
Abstract:
The ice accretion of salt water on cold substrates creates brine-spongy ice. This type of ice is a mixture of pure ice and liquid brine. A real case of the creation of this type of ice is superstructure icing, which occurs on marine vessels and offshore structures in cold and harsh conditions. Transient heat transfer through this medium causes phase changes between the brine pockets and pure ice. Salt rejection during the process of transient heat conduction increases the salinity of the brine pockets until they reach a local equilibrium state. In this process, changing the sensible heat of the ice and brine pockets is not the only effect of passing heat through the medium; latent heat plays an important role and affects the mechanism of heat transfer. In this study, a new analytical model for evaluating heat transfer through brine-spongy ice is suggested. This model considers heat transfer together with partial solidification and melting. Properties of brine-spongy ice are obtained using the properties of liquid brine and pure ice. A numerical solution using the Method of Lines discretizes the medium to reach a set of ordinary differential equations. Boundary conditions are chosen using one of the applicable cases of this type of ice: one side is considered a thermally isolated surface, and the other side is assumed to be suddenly affected by a constant-temperature boundary. All cases are evaluated at temperatures between -20 °C and the freezing point of brine-spongy ice. Solutions are conducted using different salinities from 5 to 60 ppt. Time steps and space intervals are chosen properly to maintain the most stable and fast solution. The variations of temperature, volume fraction of brine and brine salinity versus time are the most important outputs of this study. Results show that transient heat conduction through brine-spongy ice can create a wide range of brine-pocket salinities, from the initial salinity up to 180 ppt. The rate of variation of temperature is found to be slower for high-salinity cases. The maximum rate of heat transfer occurs at the start of the simulation. This rate decreases as time passes. Brine pockets are smaller at portions closer to the colder side than at the warmer side. At the start of the solution, the numerical scheme tends to develop instabilities. This is because of the sharp variation of temperature at the start of the process. Changing the intervals improves the unstable situation. The analytical model using a numerical scheme is capable of predicting the thermal behavior of brine-spongy ice. This model and the numerical solutions are important for modeling the process of freezing of salt water and ice accretion on cold structures.
Keywords: method of lines, brine-spongy ice, heat conduction, salt water
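A minimal constant-property sketch of the Method of Lines set-up described above (insulated boundary on one side, a suddenly imposed constant temperature on the other); the latent-heat source term, brine-salinity coupling and the paper's property models are omitted, and the diffusivity and geometry values are assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

# 1-D transient conduction, dT/dt = alpha * d2T/dx2, discretised in space (Method of Lines).
# Constant diffusivity stands in for the temperature/salinity-dependent properties
# and the latent-heat term of brine-spongy ice (simplifying assumptions).
alpha = 1.0e-6                  # m^2/s, assumed effective thermal diffusivity
L, n = 0.05, 51                 # 5 cm thick layer, 51 nodes
dx = L / (n - 1)
T_cold, T_init = -20.0, -2.0    # degC: suddenly imposed boundary vs. initial temperature

def rhs(t, T):
    dTdt = np.zeros_like(T)
    dTdt[0] = 0.0                                          # x = 0 held at the cold boundary value
    dTdt[1:-1] = alpha * (T[2:] - 2 * T[1:-1] + T[:-2]) / dx**2
    dTdt[-1] = alpha * 2 * (T[-2] - T[-1]) / dx**2         # insulated (zero-flux) far side
    return dTdt

T0 = np.full(n, T_init)
T0[0] = T_cold                                             # step change applied at t = 0
sol = solve_ivp(rhs, (0.0, 3600.0), T0, method="BDF", t_eval=[600, 1800, 3600])
print(np.round(sol.y[:, -1], 2))                           # temperature profile after 1 hour
```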
1259 The Effects of Nanoemulsions Based on Commercial Oils for the Quality of Vacuum-Packed Sea Bass at 2±2°C
Authors: Mustafa Durmuş, Yesim Ozogul, Esra Balıkcı, Saadet Gokdoğan, Fatih Ozogul, Ali Rıza Köşker, İlknur Yuvka
Abstract:
Food scientists and researchers have paid attention to developing new ways of improving the nutritional value of foods. The application of nanotechnology techniques to the food industry may allow the modification of food texture, taste, sensory attributes, coloring strength, processability, and stability during the shelf life of products. In this research, the effects of nanoemulsions based on commercial oils on vacuum-packed sea bass fillets stored at 2±2°C were investigated in terms of sensory, chemical (total volatile basic nitrogen (TVB-N), thiobarbituric acid (TBA), peroxide value (PV) and free fatty acids (FFA), pH, water holding capacity (WHC)) and microbiological qualities (total anaerobic bacteria and total lactic acid bacteria). The physical properties of the emulsions (viscosity, droplet particle size, thermodynamic stability, refractive index, and surface tension) were determined. The nanoemulsion preparation method was based on the high-energy principle, with an ultrasonic homogenizer. Sensory analyses of the raw fish showed that the demerit points of the control group were higher than those of the treated groups. The sensory scores (odour, taste and texture) of the cooked fillets decreased with storage time, especially in the control. Results obtained from the chemical and microbiological analyses also showed that the nanoemulsions significantly (p<0.05) decreased the values of the biochemical parameters and the growth of bacteria during the storage period, thus improving the quality of vacuum-packed sea bass.
Keywords: quality parameters, nanoemulsion, sea bass, shelf life, vacuum packing
1258 Microstructure and Mechanical Properties of Low Alloy Steel with Double Austenitizing Tempering Heat Treatment
Authors: Jae-Ho Jang, Jung-Soo Kim, Byung-Jun Kim, Dae-Geun Nam, Uoo-Chang Jung, Yoon-Suk Choi
Abstract:
Low alloy steels are widely used for pressure vessels, spent fuel storage, and steam generators required to withstand internal pressure and prevent unexpected failure in nuclear power plants, where they may suffer embrittlement from high levels of radiation and heat over a long period. Therefore, it is important to improve the mechanical properties of low alloy steels for the integrity of structural materials at an early stage of fabrication. Recently, it was shown that a double austenitizing and tempering (DAT) process resulted in a significant improvement of strength and toughness by refinement of the prior austenite grains. In this study, the mechanism by which mechanical properties improve according to the change of microstructure with the second full austenitizing temperature of the DAT process was investigated for a low alloy steel required for structural integrity. Compared to the conventional single austenitizing and tempering (SAT) process, the tensile elongation improved by about 5%, the DBTT was reduced by about 65 °C, and the grain size decreased by about 50% under the DAT process conditions. Grain refinement interferes with crack propagation due to the increase in grain boundaries and the amount of energy absorbed at low temperatures. The higher the first austenitizing temperature in the DAT process, the more the spheroidized carbides increase and the stronger the effect of fine precipitates in the ferrite grains. The area ratio of dimples in the transition region increased in proportion to the effect of the spheroidized carbides. These may be the primary mechanisms that can improve low-temperature toughness and elongation while maintaining similar hardness and strength.
Keywords: double austenitizing, ductile-brittle transition temperature, grain refinement, heat treatment, low alloy steel, low-temperature toughness
1257 Study on the Geometric Similarity in Computational Fluid Dynamics Calculation and the Requirement of Surface Mesh Quality
Authors: Qian Yi Ooi
Abstract:
At present, airfoil parameters are still designed and optimized according to the scale of conventional aircraft, and there are still some slight deviations in terms of scale differences. However, insufficient parameters or poor surface mesh quality is likely to occur if these small deviations are embedded in a future civil aircraft with a size that is quite different from conventional aircraft, such as a blended-wing-body (BWB) aircraft with future potential, resulting in large deviations in geometric similarity in computational fluid dynamics (CFD) simulations. To avoid this situation, a study of the geometric similarity of airfoil parameters and of surface mesh quality in CFD calculations is conducted to determine how well different parameterization methods apply to different airfoil scales. The research objects are three airfoil scales, including the wing root and wingtip of a conventional civil aircraft and the wing root of the giant hybrid wing, each described using three parameterization methods, to compare the calculation differences between different sizes of airfoils. In this study, the constants are NACA 0012, a Reynolds number of 10 million, an angle of attack of zero, a C-grid for meshing, and the k-epsilon (k-ε) turbulence model. The airfoil dimensions are set to 3.98 meters, 17.67 meters, and 48 meters, respectively. In addition, this study also uses different numbers of edge mesh divisions and the same bias factor in the CFD simulation. The results show that, as the airfoil scale changes, different parameterization methods, numbers of control points, and numbers of mesh divisions should be used to improve the accuracy of the aerodynamic performance of the wing. When the airfoil scale increases, the most basic point cloud parameterization method requires more and larger data to support the accuracy of the airfoil's aerodynamic performance, which faces the severe test of insufficient computer capacity. On the other hand, when using the B-spline curve method, the number of control points and the number of mesh divisions should be set appropriately to obtain higher accuracy; however, the quantitative balance cannot be directly defined, and the decisions have to be made repeatedly by adding and subtracting. Lastly, when using the CST method, it is found that a limited number of control points is enough to accurately parameterize the larger-sized wing, and a higher degree of accuracy and stability can be obtained even with a lower-performance computer.
Keywords: airfoil, computational fluid dynamics, geometric similarity, surface mesh quality
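A minimal sketch of the class function/shape function transformation (CST) parameterization mentioned above for one airfoil surface; the Bernstein order and weight values are illustrative assumptions and do not reproduce the study's NACA 0012 fits.

```python
import numpy as np
from math import comb

def cst_surface(x, weights, n1=0.5, n2=1.0, dz_te=0.0):
    """Class/Shape function Transformation for one airfoil surface:
    y(x) = C(x) * S(x) + x * dz_te, with x in [0, 1] (chord-normalised)."""
    x = np.asarray(x, dtype=float)
    n = len(weights) - 1
    class_fn = x**n1 * (1.0 - x)**n2                        # round nose, sharp trailing edge
    # Shape function: Bernstein polynomial expansion weighted by the CST coefficients
    shape_fn = sum(w * comb(n, i) * x**i * (1.0 - x)**(n - i)
                   for i, w in enumerate(weights))
    return class_fn * shape_fn + x * dz_te

x = np.linspace(0.0, 1.0, 101)
upper_w = [0.17, 0.16, 0.15, 0.14]        # illustrative weights (not a fitted NACA 0012)
y_upper = cst_surface(x, upper_w)
print(f"maximum upper-surface ordinate = {y_upper.max():.4f} of chord")
```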
1256 Territorial Brand as a Means of Structuring the French Wood Industry
Authors: Laetitia Dari
Abstract:
The brand constitutes a source of differentiation between competitors. It highlights specific characteristics that create value for the enterprise. Today the concept of a brand is not just about the product but can concern territories. The competition between territories, due to tourism, research, jobs, etc., leads territories to develop territorial brands to bring out their identity and specificity. Some territorial brands are based on natural resources or products characteristic of a territory. In the French wood sector, we can observe the emergence of many territorial brands. Supported by the inter-professional organization, these brands have the main objective of showcasing wood as a source of solutions at the local level in terms of construction and energy. The implementation of these collective projects raises the question of the way in which relations between companies are structured and animated. The central question of our work is to understand how the territorial brand promotes the structuring of a sector and the construction of collective relations between actors. In other words, we are interested in the conditions for the emergence of the territorial brand and the way in which it will be a means of mobilizing the actors around a common project. The objectives of the research are (1) to understand in which context a territorial brand emerges, (2) to analyze the way in which the territorial brand structures the collective relations between actors, (3) to give entry keys to the actors to successfully develop this type of project. Thus, our research is based on a qualitative methodology with semi-structured interviews conducted with the main territorial brands in France. The research will answer various academic and empirical questions. From an academic point of view, it brings elements of understanding to the construction of a collective project and to the way in which governance operates. From an empirical point of view, the interest of our work is to bring out the key success factors in the development of a territorial brand and how the brand can become an element of valuation for a territory.
Keywords: brand, marketing, strategy, territory, third party stakeholder, wood
1255 The Structure and Function Investigation and Analysis of the Automatic Spin Regulator (ASR) in the Powertrain System of Construction and Mining Machines with the Focus on Dump Trucks
Authors: Amir Mirzaei
Abstract:
The powertrain system is one of the most basic and essential components in a machine. The occurrence of motion is practically impossible without the presence of this system. When power is generated by the engine, it is transmitted by the powertrain system to the wheels, which are the last parts of the system. The powertrain system has different components according to the type of use and design. When the force generated by the engine reaches the wheels, the amount of frictional force between the tire and the ground determines the amount of traction and non-slip, or the amount of slip. On various surfaces, such as icy, muddy, and snow-covered ground, the friction coefficient between the tire and the ground decreases dramatically and considerably, which in turn increases the amount of force lost, and vehicle traction decreases drastically. This condition is caused by the phenomenon of slipping, which, in addition to the waste of the energy produced, causes premature wear of the driving tires. It also causes the temperature of the transmission oil to rise too much, which in turn reduces the quality of the oil, makes it dirty, and also reduces the useful life of the clutch disks and plates inside the transmission. This issue is much more important in road construction and mining machinery than in passenger vehicles and is always one of the most important and significant issues to overcome in the design discussion. One of the methods used to overcome it is the automatic spin regulator system, which is abbreviated as ASR. The importance of this method and its structure and function, which have solved one of the biggest challenges of the powertrain system in the field of construction and mining machinery, are examined in this research.
Keywords: automatic spin regulator, ASR, methods of reducing slipping, methods of preventing the reduction of the useful life of clutches disk and plate, methods of preventing the premature dirtiness of transmission oil, method of preventing the reduction of the useful life of tires
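The wheel slip that the ASR limits is conventionally quantified by a longitudinal slip ratio; the abstract does not give the controller's equations, so the following is only the generic definition used in traction-control work, stated here as a sketch.

```latex
% Longitudinal slip ratio of a driven wheel during acceleration:
s \;=\; \frac{\omega_w r_w - v}{\omega_w r_w}, \qquad 0 \le s \le 1
% \omega_w : wheel angular speed,  r_w : effective rolling radius,  v : vehicle ground speed.
% A traction-control (ASR-type) system intervenes, by reducing engine torque or braking the
% spinning wheel, when s exceeds a threshold chosen to keep the tire near the peak of the
% friction-versus-slip curve, so that less energy is wasted and tire wear is reduced.
```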
1254 Comparative Study on Productivity, Chemical Composition and Yield Quality of Some Alternative Crops in Romanian Organic Farming
Authors: Maria Toader, Gheorghe Valentin Roman, Alina Maria Ionescu
Abstract:
Crop diversity and maintaining and enhancing the fertility of agricultural land are basic principles of organic farming. A wider range of crops in the agroecosystem can improve the ability to control weeds, pests and diseases, as well as the performance of crop rotations and food safety. In this sense, the main objective of the research was to study the productivity and chemical composition of some alternative crops and their adaptability to the soil and climatic conditions of the agricultural area in Southern Romania and to cultivation in the organic farming system. The alternative crops were: lentil (7 genotypes); five species of grain legumes (5 genotypes); four species of oil crops (5 genotypes). The seed production was, on average: 1343 kg/ha for lentil; 2500 kg/ha for field beans; 2400 kg/ha for chick peas and blackeyed peas; more than 2000 kg/ha for adzuki beans; over 1250 kg/ha for fenugreek; 2200 kg/ha for safflower; 570 kg/ha for oil pumpkin; 2150 kg/ha for oil flax; and 1518 kg/ha for camelina. Regarding chemical composition, lentil seeds contained: 22.18% proteins, 3.03% lipids, 33.29% glucides, 4.00% minerals, and a 259.97 kcal energy value. For field beans: 21.50% proteins, 4.40% lipids, 63.90% glucides, 5.85% minerals, 395.36 kcal energetic value. For chick peas: 21.23% proteins, 4.55% lipids, 53.00% glucides, 3.67% minerals, 348.22 kcal energetic value. For blackeyed peas: 23.30% proteins, 2.10% lipids, 68.10% glucides, 3.93% minerals, 350.14 kcal energetic value. For adzuki beans: 21.90% proteins, 2.60% lipids, 69.30% glucides, 4.10% minerals, 402.48 kcal energetic value. For fenugreek: 21.30% proteins, 4.65% lipids, 63.83% glucides, 5.69% minerals, 396.54 kcal energetic value. For safflower: 12.60% proteins, 28.37% lipids, 46.41% glucides, 3.60% minerals, 505.78 kcal energetic value. For camelina: 20.29% proteins, 31.68% lipids, 36.28% glucides, 4.29% minerals, 526.63 kcal energetic value. For oil pumpkin: 29.50% proteins, 36.92% lipids, 18.50% glucides, 5.41% minerals, 540.15 kcal energetic value. For oil flax: 22.56% proteins, 34.10% lipids, 27.73% glucides, 5.25% minerals, 558.45 kcal energetic value.
Keywords: adaptability, alternative crops, chemical composition, organic farming productivity
1253 Robust Inference with a Skew T Distribution
Authors: M. Qamarul Islam, Ergun Dogan, Mehmet Yazici
Abstract:
There is a growing body of evidence that non-normal data is more prevalent in nature than normal data. Examples can be quoted from, but are not restricted to, the areas of Economics, Finance and Actuarial Science. The non-normality considered here is expressed in terms of fat-tailedness and asymmetry of the relevant distribution. In this study, a skew t distribution that can be used to model data that exhibit inherent non-normal behavior is considered. This distribution has tails fatter than a normal distribution and it also exhibits skewness. Although maximum likelihood estimates can be obtained by iteratively solving the likelihood equations, which are non-linear in form, this can be problematic in terms of convergence and in many other respects as well. Therefore, it is preferred to use the method of modified maximum likelihood, in which the likelihood estimates are derived by expressing the intractable non-linear likelihood equations in terms of standardized ordered variates and replacing the intractable terms by their linear approximations obtained from the first two terms of a Taylor series expansion about the quantiles of the distribution. These estimates, called modified maximum likelihood estimates, are obtained in closed form. Hence, they are easy to compute and to manipulate analytically. In fact, the modified maximum likelihood estimates are asymptotically equivalent to maximum likelihood estimates. Even in small samples the modified maximum likelihood estimates are found to be approximately the same as the maximum likelihood estimates that are obtained iteratively. It is shown in this study that the modified maximum likelihood estimates are not only unbiased but substantially more efficient than the commonly used moment estimates or the least square estimates that are known to be biased and inefficient in such cases. Furthermore, in conventional regression analysis, it is assumed that the error terms are distributed normally and, hence, the well-known least square method is considered to be a suitable and preferred method for making the relevant statistical inferences. However, a number of empirical researches have shown that non-normal errors are more prevalent. Even transforming and/or filtering techniques may not produce normally distributed residuals. Here, a study is done for multiple linear regression models with random errors having a non-normal pattern. Through an extensive simulation it is shown that the modified maximum likelihood estimates of the regression parameters are plausibly robust to the distributional assumptions and to various data anomalies as compared to the widely used least square estimates. Relevant tests of hypothesis are developed and are explored for desirable properties in terms of their size and power. The tests based upon modified maximum likelihood estimates are found to be substantially more powerful than the tests based upon least square estimates. Several examples are provided from the areas of Economics and Finance where such distributions are interpretable in terms of the efficient market hypothesis with respect to asset pricing, portfolio selection, risk measurement and capital allocation, etc.
Keywords: least square estimates, linear regression, maximum likelihood estimates, modified maximum likelihood method, non-normality, robustness
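The linearization step described above can be written schematically as follows; the exact function g depends on the skew t density used in the paper, so this is only a generic sketch of the modified maximum likelihood idea, not the paper's derivation.

```latex
% The likelihood equations involve intractable terms g(z_{(i)}) of the standardized
% ordered variates z_{(i)} = (y_{(i)} - \mu)/\sigma. Modified maximum likelihood
% replaces each by the first two terms of a Taylor expansion about t_{(i)},
% the expected (population) quantile of z_{(i)}:
g\!\left(z_{(i)}\right) \;\approx\; \alpha_i + \beta_i\, z_{(i)},
\qquad
\alpha_i = g\!\left(t_{(i)}\right) - t_{(i)}\, g'\!\left(t_{(i)}\right),
\quad
\beta_i = g'\!\left(t_{(i)}\right).
% Substituting these linear terms makes the likelihood equations linear in the
% parameters, so the modified maximum likelihood estimators come out in closed form.
```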
1252 DIF-JACKET: A Thermal Protective Jacket for Firefighters
Authors: Gilda Santos, Rita Marques, Francisca Marques, João Ribeiro, André Fonseca, João M. Miranda, João B. L. M. Campos, Soraia F. Neves
Abstract:
Every year, an unacceptable number of firefighters are seriously burned during firefighting operations, with some of them eventually losing their life. Although thermal protective clothing research and development has been searching for solutions to minimize firefighters' heat load and skin burns, currently commercially available solutions focus on solving isolated problems, for example, radiant heat or water-vapor resistance. Therefore, episodes of severe burns and heat strokes are still frequent. Taking this into account, a consortium composed of Portuguese entities has joined synergies to develop an innovative protective clothing system, by following a procedure based on the application of numerical models to optimize the design and using a combination of protective clothing components disposed in different layers. Recently, it has been shown that Phase Change Materials (PCMs) can contribute to the reduction of potential heat hazards in fire extinguishing operations, and consequently, their incorporation into firefighting protective clothing has advantages. The greatest challenge is to integrate these materials without compromising garment ergonomics and, at the same time, complying with the International Standard for protective clothing for firefighters – laboratory test methods and performance requirements for wildland firefighting clothing. The incorporation of PCMs into the firefighter's protective jacket will result in the absorption of heat from the fire and consequently increase the time that the firefighter can be exposed to it. According to the project studies and developments, to favor a higher use of the PCM storage capacity and to take advantage of its high thermal inertia more efficiently, the PCM layer should be closer to the external heat source. Therefore, at this stage, to integrate PCMs in firefighting clothing, a mock-up of a vest specially designed to protect the torso (back, chest and abdomen) and to be worn over a fire-resistant jacket was envisaged. Different configurations of PCMs, as well as multilayer approaches, were studied using suitable joining technologies such as bonding, ultrasound, and radiofrequency. Concerning firefighters' protective clothing, it is important to balance heat protection and flame resistance with comfort parameters, namely, thermal and water-vapor resistances. The impact of the most promising solutions regarding thermal comfort was evaluated to refine the performance of the global solutions. Results obtained with an experimental bench-scale model and numerical simulation regarding the integration of PCMs in a vest designed as protective clothing for firefighters will be presented.
Keywords: firefighters, multilayer system, phase change material, thermal protective clothing
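The heat-absorbing benefit of the PCM layer described above comes from its latent heat of fusion; the abstract does not quantify it, so the following is only the standard energy-storage balance, given as a sketch.

```latex
% Heat absorbed per unit mass of PCM heated from T_i through its melting point T_m to T_f:
q \;=\; c_{p,s}\,(T_m - T_i) \;+\; \Delta h_{fus} \;+\; c_{p,l}\,(T_f - T_m)
% c_{p,s}, c_{p,l} : solid- and liquid-phase specific heats of the PCM
% \Delta h_{fus}   : latent heat of fusion, the dominant term and the source of the
%                    high thermal inertia that delays the temperature rise at the skin side.
```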
1251 Mixotrophic Growth of Chlorella sp. on Raw Food Processing Industrial Wastewater: Effect of COD Tolerance
Authors: Suvidha Gupta, R. A. Pandey, Sanjay Pawar
Abstract:
The effluents from various food processing industries are found to have high BOD, COD, suspended solids, nitrate, and phosphate. Mixotrophic growth of microalgae using food processing industrial wastewater as an organic carbon source has emerged as a more effective and energy-intensive means of nutrient removal and COD reduction. The present study details the treatment of non-sterilized, unfiltered food processing industrial wastewater by microalgae for nutrient removal, as well as the determination of COD tolerance by taking different dilutions of the wastewater. In addition, the effect of different inoculum percentages of microalgae on the removal efficiency of the nutrients for a given dilution has been studied. To see the effect of dilution and COD tolerance, the wastewater, having an initial COD of 5000 mg/L (±5), nitrate of 28 mg/L (±10), and phosphate of 24 mg/L (±10), was diluted to obtain CODs of 3000 mg/L and 1000 mg/L. The experiments were carried out in 1 L conical flasks with intermittent aeration and different inoculum percentages, i.e., 10%, 20%, and 30%, of Chlorella sp. isolated from a nearby area of NEERI, Nagpur. The experiments were conducted for 6 days with a 12:12 light-dark period, and various parameters such as COD, TOC, NO₃-N, PO₄-P, and total solids were determined on a daily basis. Results revealed that, for 10% and 20% inoculum, over 90% COD and TOC reduction was obtained with wastewater containing a COD of 3000 mg/L, whereas over 80% COD and TOC reduction was obtained with wastewater containing a COD of 1000 mg/L. Moreover, the microalgae were found to tolerate wastewater containing a COD of 5000 mg/L, with over 60% and 80% reductions in COD and TOC, respectively. The obtained results were similar for 10% and 20% inoculum in all COD dilutions, whereas for 30% inoculum over 60% COD and 70% TOC reduction was obtained. In the case of nutrient removal, over 70% nitrate removal and 45% phosphate removal were obtained with 20% inoculum in all dilutions. The obtained results indicated that microalgae-assisted nutrient removal gives maximum COD and TOC reduction with 3000 mg/L COD and 20% inoculum. Hence, microalgae-assisted wastewater treatment is not only effective for the removal of nutrients but can also tolerate high COD (up to 5000 mg/L) and solid content.
Keywords: Chlorella sp., chemical oxygen demand, food processing industrial wastewater, mixotrophic growth
1250 A Literature Review on the Use of Information and Communication Technology within and between Emergency Medical Teams during a Disaster
Authors: Badryah Alshehri, Kevin Gormley, Gillian Prue, Karen McCutcheon
Abstract:
In a disaster event, sharing patient information between pre-hospital Emergency Medical Services (EMS) and Emergency Department (ED) hospitals is a complex process, during which important information may be altered or lost due to poor communication. The aim of this study was to critically discuss the current evidence base in relation to communication between pre-hospital EMS and ED hospital professionals through the use of Information and Communication Technology (ICT). This study followed a systematic approach; six electronic databases (CINAHL, Medline, Embase, PubMed, Web of Science, and IEEE Xplore Digital Library) were comprehensively searched in January 2018, and a second search was completed in April 2020 to capture more recent publications. The study selection process was undertaken independently by the study authors. Both qualitative and quantitative studies were chosen that focused on factors which are positively or negatively associated with coordinated communication between pre-hospital EMS and ED teams in a disaster event. These studies were assessed for quality, and the data were analysed according to the key screening themes which emerged from the literature search. Twenty-two studies were included. Eleven studies employed quantitative methods, seven studies used qualitative methods, and four studies used mixed methods. Four themes emerged on communication between EMTs (pre-hospital EMS and ED staff) in a disaster event using ICT. (1) Disaster preparedness plans and coordination. This theme reported that disaster plans are in place in hospitals, and in some cases, there are interagency agreements with pre-hospital and other relevant stakeholders. However, the findings showed that the disaster plans highlighted in these studies lacked information regarding coordinated communications within and between the pre-hospital and hospital settings. (2) Communication systems used in the disaster. This theme highlighted that although various communication systems are used between and within hospitals and pre-hospital settings, technical issues have influenced communication between teams during disasters. (3) Integrated information management systems. This theme suggested the need for an integrated health information system which can help pre-hospital and hospital staff to record patient data and ensure the data are shared. (4) Disaster training and drills. While some studies analysed disaster drills and training, the majority of these studies were focused on hospital departments other than EMTs. These studies suggest the need for simulation disaster training and drills that include EMTs. This review demonstrates that considerable gaps remain in the understanding of the communication between EMS and ED hospital staff in relation to the response to disasters. The review shows that although different types of ICT are used, various issues remain which affect coordinated communication among the relevant professionals.
Keywords: communication, emergency communication services, emergency medical teams, emergency physicians, emergency nursing, paramedics, information and communication technology, communication systems
1249 Reinforcing Effects of Natural Micro-Particles on the Dynamic Impact Behaviour of Hybrid Bio-Composites Made of Short Kevlar Fibers Reinforced Thermoplastic Composite Armor
Authors: Edison E. Haro, Akindele G. Odeshi, Jerzy A. Szpunar
Abstract:
Hybrid bio-composites are developed for use in protective armor through the positive hybridization offered by reinforcement of high-density polyethylene (HDPE) with short Kevlar fibers and palm wood micro-fillers. The manufacturing process involved a combination of extrusion and compression molding techniques. The mechanical behavior of Kevlar fiber reinforced HDPE with and without palm wood filler additions is compared. The effect of the weight fraction of the added palm wood micro-fillers is also determined. The Young's modulus was found to increase as the weight fraction of organic micro-particles increased. However, the flexural strength decreased with increasing weight fraction of added micro-fillers. The interfacial interactions between the components were investigated using scanning electron microscopy. The influence of the size, random alignment and distribution of the natural micro-particles was evaluated. Ballistic impact and dynamic shock loading tests were performed to determine the optimum proportion of short Kevlar fibers and organic micro-fillers needed to improve the impact strength of the HDPE. These results indicate a positive hybridization by deposition of organic micro-fillers on the surface of the short Kevlar fibers used in reinforcing the thermoplastic matrix, leading to enhancement of the mechanical strength and dynamic impact behavior of these materials. Therefore, these hybrid bio-composites can be promising materials for different applications against high-velocity impacts.
Keywords: hybrid bio-composites, organic nano-fillers, dynamic shock loading, ballistic impacts, energy absorption
1248 Ni Mixed Oxides Type-Spinel for Energy: Application in Dry Reforming of Methane for Syngas (H2 and CO) Production
Authors: Bedarnia Ishak
Abstract:
In recent years, the dry reforming of methane has received considerable attention from an environmental viewpoint because it consumes and eliminates two gases (CH4 and CO2) responsible for global warming through the greenhouse effect. Many catalysts containing noble metals (Rh, Ru, Pd, Pt and Ir) or transition metals (Ni, Co and Fe) have been reported to be active in this reaction. Compared to noble metals, Ni materials are cheap but very easily deactivated by coking. Structurally well-defined Ni-based mixed oxides, like perovskites and spinels, are being studied because they can form solid solutions and allow the composition, and thus the performance properties, to be varied. In this work, nano-sized nickel ferrite oxides are synthesized using three different methods, co-precipitation (CP), hydrothermal (HT) and sol-gel (SG), and characterized by XRD, Raman, XPS, BET, TPR, SEM-EDX and TEM-EDX. XRD patterns of all synthesized oxides showed the presence of the NiFe2O4 spinel, confirmed by Raman spectroscopy. Hematite was present only in the CP sample. Depending on the synthesis method, the surface area, particle size, as well as the surface Ni/Fe atomic ratio (XPS) and the behavior upon reduction varied. The materials were tested in methane dry reforming with CO2 at 1 atm and 650-800 °C. The catalytic activity of the spinel samples was not very high (XCH4 = 5-20 mol% and XCO2 = 25-40 mol%) when no pre-reduction step was carried out. A significant contribution of RWGS explained the low values of the H2/CO ratio obtained. The reoxidation step of the catalyst carried out after reaction showed small amounts of coke deposition. The reducing pretreatment was particularly efficient in the case of SG (XCH4 = 80 mol% and XCO2 = 92 mol%, at 800 °C), with H2/CO > 1. In conclusion, the influence of preparation was strong for most samples, and the catalytic behavior could be interpreted by considering how the distribution of cations among tetrahedral (Td) and octahedral (Oh) sites, as in (Ni2+1-xFe3+x)Td(Ni2+xFe3+2-x)OhO2-4, influenced the reducibility of the materials and thus their catalytic performance.
Keywords: NiFe2O4, dry reforming of methane, spinel oxide, oxide zenc
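For reference, the two reactions behind the conversions and the H2/CO ratio discussed above are the dry reforming of methane and the reverse water-gas shift (RWGS); the standard enthalpy values quoted in the comments are textbook figures, not taken from the abstract.

```latex
% Dry reforming of methane (strongly endothermic):
\mathrm{CH_4 + CO_2 \;\rightleftharpoons\; 2\,H_2 + 2\,CO}, \qquad \Delta H^{\circ}_{298} \approx +247\ \mathrm{kJ\,mol^{-1}}
% Reverse water-gas shift, which consumes H2 and pushes the H2/CO ratio below 1:
\mathrm{CO_2 + H_2 \;\rightleftharpoons\; CO + H_2O}, \qquad \Delta H^{\circ}_{298} \approx +41\ \mathrm{kJ\,mol^{-1}}
% Conversions reported in the abstract: X_i = (F_{i,\mathrm{in}} - F_{i,\mathrm{out}})/F_{i,\mathrm{in}}
% for i = CH4, CO2 (molar flow in and out of the reactor).
```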
1247 Hydrodynamics of Periphyton Biofilters in Recirculating Aquaculture
Authors: Adam N. Bell, Sarina J. Ergas, Michael Nystrom, Nathan P. Brennan, Kevan L. Main
Abstract:
Integrated Multi-Trophic Aquaculture (IMTA) systems have the potential to improve the sustainability of seafood production, generate organic fertilizer and feed, remove waste discharges and reduce energy use. IMTA can include periphyton biofilters, where algae and microbes grow on surfaces, along with caught detritus and amphipods. Periphyton biofilters provide many advantages: nitrification, denitrification, primary production and ecological diversity. The goal of this study was to determine how biofilter hydraulic residence time (τ) affects periphyton biomass production, dissolved oxygen (DO) and nutrient removal. A pilot-scale recirculating aquaculture system (RAS) was designed, constructed and operated at different hydraulic residence times (τ = 1, 2, 4, 6, 8 hours per tank). For each τ, a conservative tracer study was conducted to investigate system hydrodynamics. Data on periphyton weights, pH, nitrogen species, phosphorus, temperature and DO were collected. The tracer study for τ = 1 hour revealed that the normalized time was less than τ, indicating short-circuiting. The periphyton biomass production rate was relatively unaffected by τ (R_e < 1 for all τ). Average ammonia nitrogen removal was > 75% for all trials. Nitrate and nitrite did not accumulate in the RAS for τ ≥ 4 hours due to enhanced denitrification in anoxic zones. For τ ≥ 4 hours, the DO concentration was at a maximum of 4 mg L⁻¹ after 14:00 and decreased to 0 mg L⁻¹ during nighttime. At τ = 1 hour, the RAS stayed > 2 mg L⁻¹ and the DO was more evenly distributed. For the validation trial, the culture tank was stocked with Centropomus undecimalis (common snook) and the system was operated at τ = 1 hr. Preliminary results showed that a RAS with an integrated periphyton biofilter could support fish health with low nutrient concentrations and DO > 6 mg L⁻¹.
Keywords: sustainable aquaculture, resource recovery, nitrogen, microalgae, hydrodynamics, integrated multi-trophic aquaculture
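A minimal sketch of how the tracer-study conclusion above (mean residence time shorter than the nominal τ, indicating short-circuiting) is typically computed from pulse-tracer data; the concentration series and nominal residence time below are synthetic assumptions, not the study's measurements.

```python
import numpy as np

def trapz(y, x):
    """Trapezoidal integration (kept explicit to avoid version-specific numpy names)."""
    y, x = np.asarray(y, float), np.asarray(x, float)
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def mean_residence_time(t, c):
    """First moment of the residence-time distribution from pulse-tracer data:
    t_mean = integral(t * C dt) / integral(C dt)."""
    return trapz(np.asarray(t, float) * np.asarray(c, float), t) / trapz(c, t)

# Synthetic tracer response for a biofilter with a nominal HRT of 60 min (assumed values)
t = np.linspace(0.0, 240.0, 241)              # minutes after the pulse injection
c = (t / 25.0) * np.exp(-t / 25.0)            # early, skewed concentration peak
tau_nominal = 60.0                            # min, tank volume / flow rate
t_mean = mean_residence_time(t, c)
verdict = "short-circuiting" if t_mean < tau_nominal else "no short-circuiting"
print(f"t_mean = {t_mean:.1f} min vs nominal tau = {tau_nominal:.0f} min -> {verdict}")
```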
1246 Radioactivity Assessment of Sediments in Negombo Lagoon Sri Lanka
Authors: H. M. N. L. Handagiripathira
Abstract:
The distributions of naturally occurring and anthropogenic radioactive materials were determined in surface sediments taken at 27 different locations along the bank of Negombo Lagoon in Sri Lanka. Hydrographic parameters of the lagoon water and grain size analyses of the sediment samples were also carried out for this study. The conductivity of the adjacent water varied from 13.6 mS/cm to 55.4 mS/cm near the southern end and the northern end of the lagoon, respectively, and likewise salinity levels varied from 7.2 psu to 32.1 psu. The average pH of the water was 7.6 and the average water temperature was 28.7 °C. The grain size analysis gave the mass fractions of the samples as sand (60.9%), fine sand (30.6%) and fine silt+clay (1.3%) at the sampling locations. The surface sediment samples, of 1 kg wet weight each from the upper 5-10 cm layer, were oven-dried at 105 °C for 24 hours to a constant weight, homogenized and sieved through a 2 mm sieve (IAEA Technical Series No. 295). The radioactivity concentrations were determined using the gamma spectrometry technique. An Ultra Low Background Broad Energy High Purity Ge detector, BEGe (Model BE5030, Canberra), was used for the radioactivity measurements, with Canberra Industries' Laboratory Source-less Calibration Software (LabSOCS) mathematical efficiency calibration approach and Geometry Composer software. The mean activity concentrations were found to be 24 ± 4, 67 ± 9, 181 ± 10, 59 ± 8, 3.5 ± 0.4 and 0.47 ± 0.08 Bq/kg for 238U, 232Th, 40K, 210Pb, 235U and 137Cs, respectively. The mean absorbed dose rate in air, radium equivalent activity, external hazard index, annual gonadal dose equivalent and annual effective dose equivalent were 60.8 nGy/h, 137.3 Bq/kg, 0.4, 425.3 µSv/year and 74.6 µSv/year, respectively. The results of this study provide baseline information on the natural and artificial radioactive isotopes and the associated environmental pollution, together with information on radiological risk.
Keywords: gamma spectrometry, lagoon, radioactivity, sediments
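A minimal sketch of how indices like those quoted above are usually computed from the mean activity concentrations, using standard UNSCEAR-type coefficients; the coefficients and the use of 238U as a 226Ra surrogate are assumptions here, so small differences from the paper's figures are expected.

```python
def radiological_indices(a_ra, a_th, a_k):
    """Standard screening indices from activity concentrations in Bq/kg.
    a_ra: 226Ra (approximated here by the reported 238U), a_th: 232Th, a_k: 40K."""
    ra_eq = a_ra + 1.43 * a_th + 0.077 * a_k             # radium equivalent activity, Bq/kg
    d_air = 0.462 * a_ra + 0.604 * a_th + 0.0417 * a_k   # absorbed dose rate in air, nGy/h
    h_ex = a_ra / 370.0 + a_th / 259.0 + a_k / 4810.0    # external hazard index (<= 1 is acceptable)
    aede = d_air * 8760 * 0.2 * 0.7 * 1e-3               # outdoor annual effective dose, uSv/y
    return ra_eq, d_air, h_ex, aede

# Mean concentrations reported in the abstract (238U used as the 226Ra surrogate, an assumption)
ra_eq, d_air, h_ex, aede = radiological_indices(24, 67, 181)
print(f"Ra_eq = {ra_eq:.1f} Bq/kg, D = {d_air:.1f} nGy/h, "
      f"H_ex = {h_ex:.2f}, AEDE = {aede:.1f} uSv/y")
```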
1245 Enriched Education: The Classroom as a Learning Network through Video Game Narrative Development
Authors: Wayne DeFehr
Abstract:
This study is rooted in a pedagogical approach that emphasizes student engagement as fundamental to meaningful learning in the classroom. This approach creates a paradigmatic shift, from a teaching practice that reinforces the teacher’s central authority to a practice that disperses that authority among the students in the classroom through networks that they themselves develop. The methodology of this study about creating optimal conditions for learning in the classroom includes providing a conceptual framework within which the students work, as well as providing clearly stated expectations for work standards, content quality, group methodology, and learning outcomes. These learning conditions are nurtured in a variety of ways. First, nearly every class includes a lecture from the professor with key concepts that students need in order to complete their work successfully. Secondly, students build on this scholarly material by forming their own networks, where students face each other and engage with each other in order to collaborate their way to solving a particular problem relating to the course content. Thirdly, students are given short-, medium-, and long-term goals. Short-term goals relate to the week’s topic and involve workshopping particular issues relating to that stage of the course. The medium-term goals involve students submitting term assignments that are evaluated according to a well-defined rubric. And finally, long-term goals are achieved by creating a capstone project, which is celebrated and shared with classmates and interested friends on the final day of the course. The essential conclusions of the study are drawn from courses that focus on video game narrative. Enthusiastic student engagement is created not only with the dynamic energy and expertise of the instructor, but also with the inter-dependence of the students on each other to build knowledge, acquire skills, and achieve successful results.
Keywords: collaboration, education, learning networks, video games
1244 The Renewal of Chinese Urban Village on Cultural Ecology: Hubei Village as an Example
Authors: Shaojun Zheng, Lei Xu, Yunzi Wang
Abstract:
The main purpose of this research is to use cultural ecology to analyze the renewal of Shenzhen's urban villages in the process of China's urbanization, and to evaluate and guide that renewal so that it combines social value with economic efficiency and activates the urban villages. The urban village has a long history. It also contains many old buildings and a diverse population, and it maintains a strong connection with the surrounding environment. Cultural ecology, which applies the knowledge of ecology to the study of culture, provides a cultural perspective on the renewal. We take Hubei village in Shenzhen as our example. By using cultural ecology, we find a new way of dealing with the relationship between culture and other factors. It helps us give the buildings and spaces cultural meanings at different scales, and it enables us to identify a development pattern unique to the urban village. After analyzing several well-known cultural block cases, we find it possible to connect the unique culture of the urban village with the renovation of its buildings, community, and commerce. We propose the following strategies, each with a specific target: 1. Building renovation: we repair and rebuild the original buildings as little as possible and retain the original urban tissue as much as possible, to keep the original sense of place and the cultural atmosphere. 2. Community upgrade: we reshape the village stream, repair existing functions, and add events that draw people in, completing the existing cultural circle. 3. District commerce: we introduce food and drink, boutique retail, and creative industries to make full use of the historical atmosphere of the site and enhance the cultural experience. For the renewal of a seemingly chaotic, mixed urban village, it is important to break away from the conventional practice of building shopping malls or residential towers. Rather than creating such building landmarks, cultural ecology activates the urban village by exploiting its unique culture, which allows the old and the new to combine into a new stream of energy, forming a new cultural, commercial and stylish landmark of the city. Keywords: cultural ecology, urban village, renewal, combination
1243 Soil Bioremediation Monitoring Systems Powered by Microbial Fuel Cells
Authors: András Fülöp, Lejla Heilmann, Zsolt Szabó, Ákos Koós
Abstract:
Microbial fuel cells (MFCs) offer a sustainable biotechnological answer to future energy demands. The aim of this study was to construct soil-based, single-cell, membrane-less MFC systems that operate without treatment and continuously power on-site monitoring and control systems during soil bioremediation. Our Pseudomonas aeruginosa 541 isolate is an ideal choice for MFCs because it produces pyocyanin, which behaves as an electron-shuttle molecule and also has a significant antimicrobial effect. We tested several materials and structural configurations to obtain a long-term high power output. Among the configurations compared, proton-exchange-membrane-less MFC tubes 0.6 m long and 0.05 m in diameter offered the best long-term performance. Long-term electricity production was tested using starch, yeast extract (YE) and carboxymethyl cellulose (CMC), with humic acid (HA) as a mediator. In all cases, a 3 kΩ external load was used. The two best-performing systems were the Pseudomonas aeruginosa 541-containing MFCs with 1% carboxymethyl cellulose and the MFCs with 1% yeast extract in the anode area and 35% hydrogel in the cathode chamber; their power densities were 3.3 ± 0.033 mW/m2 and 4.1 ± 0.065 mW/m2, respectively. These systems operated for 230 days without any treatment. The addition of 0.2% HA and 1% YE, relative to the volume of the anode area, resulted in a power density of 1.4 ± 0.035 mW/m2, while a mixture of 1% starch with 0.2% HA gave 1.82 ± 0.031 mW/m2. Using CMC as a slow-release carbon source supports long-term bacterial survival and thus sustains the long-term power output. The application of hydrogels in the cathode chamber significantly increased the performance of the MFC units owing to their good water retention capacity. Keywords: microbial fuel cell, bioremediation, Pseudomonas aeruginosa, biotechnological solution
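Power density figures of the kind quoted above are normally obtained by measuring the cell voltage across the external load and dividing the dissipated power by the electrode area. A minimal sketch of that conversion follows; the voltage and anode area used here are hypothetical placeholders, since the abstract reports only the load (3 kΩ) and the final power densities.

```python
# Convert an MFC cell voltage measured across the external load into an
# areal power density. The voltage and electrode area below are hypothetical
# values for illustration only; the abstract does not report them.

R_ext = 3000.0   # external load, ohms (3 kOhm, as in the study)
U = 0.35         # measured cell voltage, volts (hypothetical)
A_anode = 0.01   # projected anode area, m^2 (hypothetical)

P = U ** 2 / R_ext                 # power dissipated in the load, watts
p_density = P / A_anode * 1000.0   # areal power density, mW/m^2

print(f"P = {P * 1000:.3f} mW, power density = {p_density:.2f} mW/m^2")
```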
1242 Modeling Vegetation Phenological Characteristics of Terrestrial Ecosystems
Authors: Zongyao Sha
Abstract:
Green vegetation plays a vital role in the energy flows and matter cycles of terrestrial ecosystems, and vegetation phenology may not only be influenced by climate change but also impose active feedback on it. Phenological events such as the start of the season (SOS), end of the season (EOS) and length of the season (LOS) can respond to climate changes and affect gross primary productivity (GPP). Here we coupled satellite remote sensing imagery with FLUXNET observations to systematically map the shifts in SOS, EOS and LOS across global vegetated areas and explored their response to climate fluctuations and their feedback on GPP during the last two decades. Results indicated that SOS advanced significantly, at an average rate of 0.19 days/year at the global scale, particularly in the northern hemisphere above the middle latitudes (≥30°N), and that EOS was slightly delayed during the past two decades, resulting in a prolonged LOS in 72.5% of the vegetated area. The phenological shifts are attributed to climate factors, including seasonal temperature and precipitation, although with large spatial and temporal differences. The study revealed interactions between vegetation phenology and climate changes. Both temperature and precipitation affect vegetation phenology; higher temperatures, as a direct consequence of global warming, advanced the vegetation green-up date. In addition, 75.9% and 20.2% of the vegetated area showed, respectively, a positive correlation and a significant positive correlation between annual GPP and the length of the vegetation growing season (LOS), likely indicating an enhancing effect of the shifted vegetation phenology on vegetation productivity and thus increased carbon uptake. Our study provides a comprehensive view of the vegetation phenology changes of global terrestrial ecosystems during the last two decades. The interactions between the shifted vegetation phenology and climate changes may provide useful information for better understanding the future trajectory of global climate change, and the feedback on GPP from the shifted phenology may serve as an adaptation mechanism by which terrestrial ecosystems mitigate global warming through improved carbon uptake from the atmosphere. Keywords: vegetation phenology, growing season, NPP, correlation analysis
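Statistics of the kind reported above (the 0.19 days/year SOS trend and the GPP–LOS correlation) are typically computed per pixel with an ordinary least-squares trend and a Pearson correlation. The sketch below illustrates the calculation with synthetic series standing in for the satellite-derived data; the numbers it produces are not the study's results.

```python
import numpy as np
from scipy import stats

# Synthetic per-pixel time series standing in for satellite-derived phenology;
# the actual study coupled remote sensing imagery with FLUXNET observations.
rng = np.random.default_rng(0)
years = np.arange(2001, 2021)
sos = 120 - 0.19 * (years - years[0]) + rng.normal(0, 2, years.size)   # DOY
los = 180 + 0.3 * (years - years[0]) + rng.normal(0, 3, years.size)    # days
gpp = 1200 + 2.5 * (los - los.mean()) + rng.normal(0, 20, years.size)  # gC/m^2/yr

# Linear trend of start of season (days/year) via ordinary least squares
sos_trend = stats.linregress(years, sos)
print(f"SOS trend: {sos_trend.slope:.2f} days/year (p = {sos_trend.pvalue:.3f})")

# Pearson correlation between annual GPP and growing-season length
r, p = stats.pearsonr(gpp, los)
print(f"GPP vs. LOS: r = {r:.2f}, p = {p:.3f}")
```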
1241 Chemical and Physical Properties and Biocompatibility of Ti–6Al–4V Produced by Electron Beam Rapid Manufacturing and Selective Laser Melting for Biomedical Applications
Authors: Bing–Jing Zhao, Chang-Kui Liu, Hong Wang, Min Hu
Abstract:
Electron beam rapid manufacturing (EBRM) and selective laser melting (SLM) are additive manufacturing processes that use 3D CAD data as the digital information source and energy in the form of a high-power electron beam or laser beam to create three-dimensional metal parts by fusing fine metallic powders together. Objective: The present study was conducted to evaluate the mechanical properties, phase transformation, corrosion behaviour and biocompatibility of Ti-6Al-4V produced by EBRM, SLM and forging. Method: Ti-6Al-4V alloy standard test pieces were manufactured by the EBRM, SLM and forging techniques in accordance with AMS4999, GB/T228 and ISO 10993. The mechanical properties were measured with a universal testing machine. The phase transformation was analyzed by X-ray diffraction and scanning electron microscopy. The corrosion behaviour was analyzed by electrochemical methods. The biocompatibility was assessed by co-culturing with mesenchymal stem cells, using scanning electron microscopy (SEM) and an alkaline phosphatase assay (ALP) to evaluate cell adhesion and differentiation, respectively. Results: The mechanical properties, phase transformation, corrosion behaviour and biocompatibility of the Ti-6Al-4V produced by EBRM and SLM were similar to those of the forged alloy and met the mechanical property requirements of the AMS4999 standard. The EBRM product showed an α-phase microstructure, in contrast to the α′-phase microstructure of the SLM product. Mesenchymal stem cell adhesion and differentiation were good. Conclusion: The properties of the Ti-6Al-4V alloy manufactured by the EBRM and SLM techniques can meet the relevant medical standards according to this study, but further work is needed before they can be applied with confidence in clinical practice. Keywords: 3D printing, Electron Beam Rapid Manufacturing (EBRM), Selective Laser Melting (SLM), Computer Aided Design (CAD)
1240 Influence of UV Aging on the Mechanical Properties of Polycarbonate
Authors: S. Redjala, N. Ait Hocine, M. Gratton, N. Poirot, R. Ferhoum, S. Azem
Abstract:
Polycarbonate (PC) is a promising polymer with high transparency across the visible spectrum and is used in various fields, for example the medical, electronics and automotive industries. Its low weight, chemical inertness, high impact resistance and relatively low cost are of major importance. In recent decades, materials such as metals and ceramics have been replaced by polymers because of these advantages. However, some characteristics of polymers are strongly modified by ultraviolet (UV) radiation and temperature. The changes induced in the material by such aging depend on the exposure time, the wavelength of the UV radiation and the temperature level. The UV energy is sufficient to break chemical bonds, leading to cleavage of the molecular chains. This causes changes in the mechanical, thermal, optical and morphological properties of the material. The present work focuses on the effects of aging under UV radiation and at different temperatures on the physico-chemical and mechanical properties of a PC. Various investigations, such as FTIR and XRD analyses, SEM and optical microscopy observations, micro-hardness measurements and monotonic and cyclic tensile tests, were carried out on the PC in the initial state and after aging. The results show the impact of aging on the properties of the PC studied. The SEM observations revealed changes in the surface morphology of the material, with cracks and material de-bonding in the form of debris. The FTIR spectra reveal an attenuation of peaks such as that of the hydroxyl (OH) groups located at 3520 cm-1. The XRD lines shift towards larger angles, by up to 3°. In addition, Vickers micro-hardness measurements show that aging affects both the surface and the core of the material, which results in different mechanical behaviours under monotonic and cyclic tensile tests. This study thus relates the effects of aging on the macroscopic properties of the PC studied to its microstructural changes. Keywords: mechanical properties, physical-chemical properties, polycarbonate, UV aging, temperature aging
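To make the reported XRD shift concrete, Bragg's law relates a peak moving to a larger diffraction angle to a reduced interplanar spacing. The sketch below works through that conversion; the radiation wavelength and initial peak position are assumptions made for illustration, since the abstract gives only the magnitude of the shift.

```python
import math

# Bragg's law: n * lambda = 2 * d * sin(theta). A diffraction peak shifting
# toward larger 2-theta corresponds to a smaller interplanar spacing d.
# The wavelength and initial peak position are assumed for illustration;
# the abstract reports only the size of the shift (up to 3 degrees).

wavelength = 1.5406      # Cu K-alpha wavelength in angstroms (assumed source)
two_theta_before = 17.0  # hypothetical peak position before aging, degrees
two_theta_after = 20.0   # after aging: shifted by the reported maximum of 3 degrees

def d_spacing(two_theta_deg: float, lam: float = wavelength, n: int = 1) -> float:
    """First-order interplanar spacing (angstroms) from a 2-theta position."""
    theta = math.radians(two_theta_deg / 2.0)
    return n * lam / (2.0 * math.sin(theta))

d0, d1 = d_spacing(two_theta_before), d_spacing(two_theta_after)
print(f"d before: {d0:.3f} A, d after: {d1:.3f} A, change: {d1 - d0:+.3f} A")
```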
1239 Two Dimensional Steady State Modeling of Temperature Profile and Heat Transfer of Electrohydrodynamically Enhanced Micro Heat Pipe
Authors: H. Shokouhmand, M. Tajerian
Abstract:
Laminar forced convection flow through a micro heat pipe of square cross section under an applied electrohydrodynamic (EHD) field has been investigated numerically. In the present study, pentane is selected as the working fluid. Temperature and velocity profiles and the heat transfer enhancement in the micro heat pipe under the EHD field have been calculated numerically for two-dimensional, single-phase fluid flow in the steady-state regime. In this model, only the Coulomb force is considered. The study has been carried out for Reynolds numbers from 10 to 100 and applied EHD voltages up to 8 kV. The coupled, non-linear equations governing the model (the continuity, momentum and energy equations) have been solved simultaneously by CFD numerical methods. The steady-state behaviour of the relevant parameters, e.g. the friction factor, average temperature, Nusselt number and heat transfer enhancement criteria, has been evaluated. It has been observed that the effect of the EHD force becomes more significant as the Reynolds number increases, while the rate of heat transfer enhancement is higher at smaller Reynolds numbers. By obtaining and plotting the above parameters, it has been shown that the EHD field enhances the heat transfer process. The numerical results show that increasing the EHD force field increases the absolute values of the Nusselt number and the friction factor and decreases the average temperature of the fluid flow; however, the Nusselt number increases faster than the friction factor, which makes applying an EHD force field for heat transfer enhancement in micro heat pipes acceptable and practical. The numerical results of the model are in good agreement with the experimental results available in the literature. Keywords: micro heat pipe, electrohydrodynamic force, Nusselt number, average temperature, friction factor
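The enhancement criteria above are typically post-processed from the simulated temperature and pressure fields. A minimal sketch of such a post-processing step is shown below; every numerical value in it is a hypothetical placeholder (approximate pentane properties and assumed channel dimensions), since the abstract reports only trends rather than field data.

```python
# Post-processing sketch: Nusselt number and Darcy friction factor from
# simulated channel-flow quantities. All numerical values are hypothetical
# placeholders; the abstract reports trends only, not raw field data.

k_fluid = 0.113   # thermal conductivity of liquid pentane, W/(m K) (approximate)
rho = 626.0       # pentane density, kg/m^3 (approximate)
D_h = 1.0e-3      # hydraulic diameter of the square channel, m (assumed)
L = 0.05          # channel length, m (assumed)
u_mean = 0.02     # mean velocity, m/s (hypothetical)

q_wall = 5000.0   # wall heat flux, W/m^2 (hypothetical)
T_wall = 310.0    # wall temperature, K (hypothetical)
T_bulk = 300.0    # bulk fluid temperature, K (hypothetical)
dp = 7.0          # pressure drop over the channel, Pa (hypothetical)

h = q_wall / (T_wall - T_bulk)                 # convective heat transfer coefficient
Nu = h * D_h / k_fluid                         # Nusselt number
f = dp * D_h / (L * 0.5 * rho * u_mean ** 2)   # Darcy friction factor

print(f"h = {h:.1f} W/(m^2 K), Nu = {Nu:.2f}, f = {f:.2f}")
```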
1238 Promises versus Realities: A Critical Assessment of the Integrated Design Process
Authors: Firdous Nizar, Carmela Cucuzzella
Abstract:
This paper explores how the integrated design process (IDP) was adopted for an architectural project. The IDP is a relatively new approach to collaborative design in architectural projects in Canada. It has gained much traction recently as the closest available approach to the successful management of low-energy building projects and has been advocated as a productive method for multi-disciplinary collaboration within complex projects. This study is based on the premise that there are explicit and implicit dimensions of power within the IDP in the green building industry that may or may not lead to irreconcilable differences in a process that demands consensus. To gain insight into the potential gap between the theoretical promises and practical realities of the IDP, a review of the existing IDP literature is compared with a case study analysis of a competition-based architectural project in Canada, a first in incorporating the IDP in its overall design format. This paper aims to address the under-theorized power relations of the IDP in a real project. It presents a critical assessment through the combined lenses of the theory of deliberative democracy by Jürgen Habermas and the agonistic pluralism of political theorist Chantal Mouffe. These two theories are intended to more appropriately embrace the conflictual situations that arise in collaborative environments and to shed light on the relationships of power between engineers, city officials, architects and designers in this conventional consensus-based model. In addition, propositions for a shift in approach that embraces conflictual differences among participants are put forth, based on concepts of critical spatial practice by Markus Meissen. As the IDP is a relatively new design process, much deliberation on its structure, informed by the theoretical framework built in this paper, is required in order to unlock its true potential. Keywords: agonistic pluralism, critical spatial practice, deliberative democracy, integrated design process