Search results for: fire detector
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 897


117 Optimized Deep Learning-Based Facial Emotion Recognition System

Authors: Erick C. Valverde, Wansu Lim

Abstract:

Facial emotion recognition (FER) systems have recently been developed for more advanced computer vision applications. The ability to identify human emotions would enable smart healthcare facilities to diagnose mental health illnesses (e.g., depression and stress) and would allow better human social interaction with smart technologies. The FER system involves two steps: 1) a face detection task and 2) a facial emotion recognition task. It classifies human expressions into categories such as angry, disgust, fear, happy, sad, surprise, and neutral. Such a system requires intensive research to address issues with human diversity, unique human expressions, and the variety of facial features across age groups. These issues generally limit the ability of an FER system to detect human emotions with high accuracy. Early FER systems used simple supervised classification algorithms such as K-nearest neighbors (KNN) and artificial neural networks (ANN). These conventional systems suffered from low accuracy because they could not efficiently extract the significant features of several human emotions. To increase the accuracy of FER systems, deep learning (DL)-based methods, such as convolutional neural networks (CNN), have been proposed. These methods can find more complex features in the human face through the deeper connections within their architectures. However, the inference speed and computational cost of a DL-based FER system are often disregarded in exchange for higher accuracy. To cope with this drawback, an optimized DL-based FER system is proposed in this study. An extreme version of Inception V3, known as the Xception model, is leveraged by applying different network optimization methods. Specifically, network pruning and quantization are used to lower computational costs and reduce memory usage, respectively.
To support low resource requirements, a 68-landmark face detector from Dlib is used in the early step of the FER system. Furthermore, a DL compiler is utilized to incorporate advanced optimization techniques into the Xception model to improve the inference speed of the FER system. In comparison to VGG-Net and ResNet50, the proposed optimized DL-based FER system experimentally demonstrates the objectives of the network optimization methods used. As a result, the proposed approach can be used to create an efficient, real-time FER system.
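The pruning and quantization steps mentioned above can be sketched in a few lines; the threshold and bit width below are illustrative assumptions, not values from the paper, and the weight list stands in for one layer of a real model such as Xception:

```python
def prune_weights(weights, threshold=0.05):
    """Magnitude pruning: zero out weights whose absolute value is below threshold."""
    return [0.0 if abs(w) < threshold else w for w in weights]

def quantize_weights(weights, num_bits=8):
    """Uniform symmetric quantization to signed integers of num_bits."""
    qmax = 2 ** (num_bits - 1) - 1                 # 127 for int8
    scale = max(abs(w) for w in weights) / qmax or 1.0
    return [round(w / scale) for w in weights], scale

def dequantize(codes, scale):
    """Recover approximate float weights from integer codes."""
    return [c * scale for c in codes]

# Hypothetical layer weights; small ones are pruned, the rest stored as int8.
weights = [0.30, -0.02, 0.75, 0.01, -0.48, 0.04, -0.90]
pruned = prune_weights(weights)            # small weights become exactly 0
q, scale = quantize_weights(pruned)        # integer codes + one float scale
restored = dequantize(q, scale)            # approximate original weights
```

Pruning creates zeros that sparse kernels can skip (lower compute), while quantization stores each weight in 8 bits instead of 32 (lower memory), at the cost of a bounded rounding error of at most half the scale.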

Keywords: deep learning, face detection, facial emotion recognition, network optimization methods

Procedia PDF Downloads 78
116 Single Cell Oil of Oleaginous Fungi from Lebanese Habitats as a Potential Feed Stock for Biodiesel

Authors: M. El-haj, Z. Olama, H. Holail

Abstract:

Single cell oils (SCOs) accumulated by oleaginous fungi have emerged as a potential alternative feedstock for biodiesel production. Five fungal strains were isolated from the Lebanese environment, namely Fusarium oxysporum, Mucor hiemalis, Penicillium citrinum, Aspergillus tamarii, and Aspergillus niger, selected from among 39 oleaginous strains for their ability to accumulate lipids (lipid content above 40% on a dry weight basis). The strains were cultivated under submerged fermentation on a medium containing glucose as the carbon source, and wide variations were recorded in the environmental factors that led to maximum lipid production. Maximum lipid production was attained within 6-8 days, at pH 6-7, with a seed culture aged 24 to 48 hours, an inoculum level of 4-6 × 10⁷ spores/ml, and a culture volume of 100 ml. Eleven culture conditions were examined for their significance on lipid production using a Plackett-Burman factorial design. Reducing sugars and the nitrogen source were the most significant factors affecting the lipid production process. Maximum lipid yields of 15.62, 14.48, 12.75, 13.68, and 20.41 g/l were recorded for Fusarium oxysporum, Mucor hiemalis, Penicillium citrinum, Aspergillus tamarii, and Aspergillus niger, respectively. A verification experiment carried out to examine the model revealed more than 94% validity. The profile of lipids extracted from each fungal isolate was studied using thin layer chromatography (TLC), indicating the presence of monoacylglycerols, diacylglycerols, free fatty acids, triacylglycerols, and sterol esters. The fatty acid profiles were also determined by gas chromatography coupled with a flame ionization detector (GC-FID).
The data revealed significant amounts of oleic acid (29-36%), palmitic acid (18-24%), and linoleic acid (26.8-35%), with low amounts of other fatty acids in the extracted fungal oils, indicating fatty acid profiles quite similar to those of conventional vegetable oils. The cost of lipid production could be further reduced by using acid-pretreated lignocellulosic corncob waste, whey, and date molasses as raw materials for the oleaginous fungi. The results showed that microbial lipid from the studied fungi is a potential alternative resource for biodiesel production.
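The 11-factor screening mentioned above uses the classical 12-run Plackett-Burman design: 11 cyclic shifts of a standard generating row plus one all-low run. A minimal sketch of that construction (the generator is the textbook one, not taken from the paper) with an orthogonality check:

```python
# Classical generating row for the 12-run Plackett-Burman design (+1/-1 levels).
GENERATOR = [+1, +1, -1, +1, +1, +1, -1, -1, -1, +1, -1]

def plackett_burman_12():
    """Build the 12-run, 11-factor design: 11 cyclic shifts of the
    generator plus a final row of all -1 (low) levels."""
    rows = [GENERATOR[i:] + GENERATOR[:i] for i in range(11)]
    rows.append([-1] * 11)
    return rows

design = plackett_burman_12()

# Each column is balanced (6 high, 6 low) and every pair of columns has a
# zero dot product, so the 11 main effects are estimated independently.
cols = list(zip(*design))
balanced = all(sum(c) == 0 for c in cols)
orthogonal = all(sum(a * b for a, b in zip(cols[i], cols[j])) == 0
                 for i in range(11) for j in range(i + 1, 11))
```

Each of the 12 runs fixes all 11 culture conditions at their high or low level at once, which is what lets a screening experiment rank factors such as reducing sugars and nitrogen source with so few fermentations.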

Keywords: agro-industrial waste products, biodiesel, fatty acid, single cell oil, Lebanese environment, oleaginous fungi

Procedia PDF Downloads 371
115 Variability of the X-Ray Sun during Descending Period of Solar Cycle 23

Authors: Zavkiddin Mirtoshev, Mirabbos Mirkamalov

Abstract:

We have analyzed the time series of full-disk integrated soft X-ray (SXR) and hard X-ray (HXR) emission from the solar corona from 2004 January 1 to 2009 December 31, covering the descending phase of solar cycle 23. We employed the daily X-ray index (DXI) derived from X-ray observations of the Solar X-ray Spectrometer (SOXS) mission in four energy bands: 4-5.5 and 5.5-7.5 keV (SXR), and 15-20 and 20-25 keV (HXR). Application of the Lomb-Scargle periodogram technique to the DXI time series observed by the silicon detector in these energy bands reveals several short and intermediate periodicities of the X-ray corona. The DXI explicitly shows periods of 13.6 days, 26.7 days, 128.5 days, 151 days, 180 days, 220 days, 270 days, 1.24 years, and 1.54 years in both the SXR and HXR energy bands. Although all periods are above the 70% confidence level in all energy bands, they show stronger power in HXR emission than in SXR emission. These periods are distinctly clear in three bands but not unambiguously clear in the 5.5-7.5 keV band. This might be due to the presence of Fe and Fe/Ni line features, which frequently vary with small-scale flares such as micro-flares. The regular 27-day rotation and the 13.5-day period of sunspots on the invisible side of the Sun are stronger in the HXR band than in the SXR band. The flare-activity Rieger periods (150 and 180 days) and the near-Rieger period of 220 days are very strong in HXR emission, as expected. On the other hand, our study reveals a strong 270-day periodicity in SXR emission, which may be connected with the tachocline, similar to a fundamental rotation period of the Sun. The 1.24-year and 1.54-year periodicities found in this work are well observable in both the SXR and HXR channels.
These long-term periodicities must also be connected with the tachocline and should be regarded as a consequence of variation in rotational modulation over long time scales. The 1.24-year and 1.54-year periods are also considered important for the formation and evolution of life on Earth and therefore have astrobiological significance. We gratefully acknowledge support by the Indian Centre for Space Science and Technology Education in Asia and the Pacific (CSSTEAP, affiliated to the United Nations) and the Physical Research Laboratory (PRL) at Ahmedabad, India. This work was done under the supervision of Prof. Rajmal Jain, and the paper consists of material from a pilot project and the research part of the M.Tech program carried out during the Space and Atmospheric Science Course.
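The Lomb-Scargle periodogram used above is suited to daily indices because it tolerates uneven sampling and data gaps. A minimal pure-Python sketch on synthetic data (the 27-day test signal mirrors the solar rotation period; nothing here is from the SOXS data itself):

```python
import math

def lomb_scargle(t, y, omega):
    """Lomb-Scargle power at angular frequency omega for an unevenly
    sampled, mean-subtracted series y(t)."""
    tau = math.atan2(sum(math.sin(2 * omega * ti) for ti in t),
                     sum(math.cos(2 * omega * ti) for ti in t)) / (2 * omega)
    c = [math.cos(omega * (ti - tau)) for ti in t]
    s = [math.sin(omega * (ti - tau)) for ti in t]
    yc = sum(yi * ci for yi, ci in zip(y, c))
    ys = sum(yi * si for yi, si in zip(y, s))
    return 0.5 * (yc ** 2 / sum(ci ** 2 for ci in c) +
                  ys ** 2 / sum(si ** 2 for si in s))

# Synthetic daily index with a 27-day rotation signal and missing days.
t = [d for d in range(600) if d % 7 != 3]              # irregular gaps
y = [math.sin(2 * math.pi * ti / 27.0) for ti in t]
mean = sum(y) / len(y)
y = [yi - mean for yi in y]

periods = list(range(10, 51))                          # trial periods in days
powers = [lomb_scargle(t, y, 2 * math.pi / p) for p in periods]
best_period = periods[powers.index(max(powers))]       # expect 27 days
```

In practice one would scan a much finer frequency grid and attach confidence levels (such as the 70% level quoted above) by estimating the noise distribution of the power.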

Keywords: corona, flares, solar activity, X-ray emission

Procedia PDF Downloads 323
114 Climate Change Effects of Vehicular Carbon Monoxide Emission from Road Transportation in Part of Minna Metropolis, Niger State, Nigeria

Authors: H. M. Liman, Y. M. Suleiman, A. A. David

Abstract:

Poor air quality, often considered one of the greatest environmental threats facing the world today, is caused largely by the emission of carbon monoxide, the principal air pollutant, into the atmosphere. One prominent source of carbon monoxide emission is the transportation sector, yet little was known about the emission levels of carbon monoxide from road transportation in the study area. Therefore, this study assessed the levels of carbon monoxide emission from road transportation in Minna, Niger State. An MSA Altair gas alert detector was used to take carbon monoxide readings, in parts per million (ppm), for the peak and off-peak periods of vehicular movement at road intersections, and the Global Positioning System (GPS) coordinates of the intersections were recorded in the Universal Transverse Mercator (UTM) system. Bar charts were plotted using the carbon monoxide emission levels recorded in the field against the scientifically established, internationally accepted safe limit of 8.7 ppm of carbon monoxide in the atmosphere. Further statistical analysis of the field data was carried out using the Statistical Package for the Social Sciences (SPSS) software and Microsoft Excel to show the variance of the emission levels of each parameter in the study area. The results established that emission levels of atmospheric carbon monoxide from road transportation in the study area exceeded the internationally accepted safe limit of 8.7 ppm. In addition, the variation in average emission levels of CO between the four periods showed that the morning peak had the highest average emission level (24.5 ppm), followed by the evening peak (22.84 ppm) and the morning off-peak (15.33 ppm), while the evening off-peak was lowest (12.94 ppm).
Based on these results, two recommendations are made for mitigating poor air quality by reducing carbon monoxide emissions from transportation. First, introducing urban mass transit would reduce the number of vehicles on the roads, and hence the emissions from the several vehicles each bus replaces; it would also be a cheaper means of transportation for the masses. Second, encouraging the use of vehicles running on alternative sources of energy such as solar, electricity, and biofuel would also lower emission levels, since these alternatives, unlike diesel and petrol vehicles, emit little or no carbon monoxide.
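The comparison of observed CO levels against the 8.7 ppm limit reduces to a simple threshold check; a sketch using the four period averages reported above:

```python
SAFE_LIMIT_PPM = 8.7   # internationally accepted CO limit cited in the study

# Average CO emission levels (ppm) for the four periods, from the abstract.
period_averages = {
    "morning peak": 24.5,
    "evening peak": 22.84,
    "morning off-peak": 15.33,
    "evening off-peak": 12.94,
}

def exceedances(averages, limit=SAFE_LIMIT_PPM):
    """Return each period above the limit with its margin of exceedance (ppm)."""
    return {period: round(ppm - limit, 2)
            for period, ppm in averages.items() if ppm > limit}

over = exceedances(period_averages)   # all four periods exceed the limit
worst = max(over, key=over.get)       # period with the largest exceedance
```

Even the quietest period (evening off-peak) sits about 4.2 ppm above the safe limit, which is what motivates the mitigation recommendations.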

Keywords: carbon monoxide, climate change emissions, road transportation, vehicular

Procedia PDF Downloads 350
113 Study of Polyphenol Profile and Antioxidant Capacity in Italian Ancient Apple Varieties by Liquid Chromatography

Authors: A. M. Tarola, R. Preti, A. M. Girelli, P. Campana

Abstract:

Safeguarding, studying, and enhancing biodiversity play an important and indispensable role in re-launching agriculture, and ancient local varieties are therefore a precious resource for genetic and health improvement. In order to protect biodiversity through the recovery and valorization of autochthonous varieties, in this study we analyzed 12 samples of four ancient apple cultivars representative of Friuli Venezia Giulia, selected by local farmers working on a project for the recovery of ancient apple cultivars. The aim of this study is to evaluate the polyphenolic profile and the antioxidant capacity that characterize the organoleptic and functional qualities of this fruit species, which also has beneficial properties for health. In particular, for each variety, the following compounds were analyzed, both in the skin and in the pulp, to highlight any differences between the edible parts of the apple: gallic acid, catechin, chlorogenic acid, epicatechin, caffeic acid, coumaric acid, ferulic acid, rutin, phlorizin, phloretin, and quercetin. The analysis of individual phenolic compounds was performed by High Performance Liquid Chromatography (HPLC) coupled with a diode array UV detector (DAD), the antioxidant capacity was estimated using an in vitro assay based on a free radical scavenging method, and the total phenolic content was determined using the Folin-Ciocalteu method. The results show that catechins are the most abundant polyphenols, reaching 140-200 μg/g in the pulp and 400-500 μg/g in the skin, with a prevalence of epicatechin. Catechins and phlorizin, a dihydrochalcone typical of apples, are always present in larger quantities in the peel. Total phenolic content was positively correlated with antioxidant activity in apple pulp (r² = 0.850) and peel (r² = 0.820). Comparing the results, differences between the varieties analyzed and between the edible parts (pulp and peel) of the apple were highlighted.
In particular, the apple peel is richer in polyphenolic compounds than the pulp, and flavonols are present exclusively in the peel. In conclusion, polyphenols, being antioxidant substances, confirm the benefits of fruit in the diet, especially for the prevention and treatment of degenerative diseases. They also proved to be a good marker for the characterization of different apple cultivars. The study further highlights the importance of protecting biodiversity in agriculture through the exploitation of native products and now-forgotten ancient apple varieties.
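The correlation reported between total phenolic content and antioxidant activity (r² = 0.850 in pulp, 0.820 in peel) is the squared Pearson coefficient; a pure-Python sketch with illustrative, not measured, paired values:

```python
import math

def r_squared(x, y):
    """Squared Pearson correlation coefficient between paired samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return (cov / (sx * sy)) ** 2

# Hypothetical paired measurements: total phenolic content and
# radical-scavenging antioxidant activity for six samples.
phenolics = [120, 150, 180, 210, 260, 300]
activity = [31, 36, 45, 50, 61, 70]

r2 = r_squared(phenolics, activity)   # close to 1 for a strong linear trend
```

An r² of 0.85 means about 85% of the variance in antioxidant activity is explained by a linear dependence on total phenolic content.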

Keywords: apple, biodiversity, polyphenols, antioxidant activity, HPLC-DAD, characterization

Procedia PDF Downloads 116
112 Calibration of 2D and 3D Optical Measuring Instruments in Industrial Environments at Submillimeter Range

Authors: Alberto Mínguez-Martínez, Jesús de Vicente y Oliva

Abstract:

Modern manufacturing processes have led to the miniaturization of systems and, as a result, parts at the micro- and nanoscale are produced. This trend seems set to become increasingly important in the near future. Besides, as a requirement of Industry 4.0, the digitalization of production and process models makes it very important to ensure that the dimensions of newly manufactured parts meet the specifications of the models. It is thereby possible to reduce scrap and the cost of non-conformities while ensuring the stability of production. To ensure the quality of manufactured parts, it becomes necessary to carry out traceable measurements at scales below one millimeter. Providing 2D and 3D measurements at this scale with adequate traceability to the SI unit of length (the meter) is a problem without a unique solution in industrial environments, and researchers in the field of dimensional metrology all around the world are working on this issue. A solution for industrial environments, even if incomplete, would enable working with some traceability. At this point, we believe that the study of surfaces could provide a first approximation to a solution. Among the different options proposed in the literature, areal topography methods may be the most relevant because they can be compared to measurements performed with Coordinate Measuring Machines (CMMs). These measuring methods give (x, y, z) coordinates for each point, expressed in one of two ways: either the z coordinate as a function of x, denoted z(x), for each Y-axis coordinate, or as a function of both x and y, denoted z(x, y). Among others, optical measuring instruments, mainly microscopes, are extensively used to carry out measurements at scales below one millimeter because they are non-destructive.
In this paper, the authors propose a calibration procedure for the scales of optical measuring instruments, particularized for a confocal microscope, using material standards that are easy to find and calibrate in metrology and quality laboratories in industrial environments. Confocal microscopes are measuring instruments capable of filtering out-of-focus reflected light so that only the in-focus part of the surface reaches the detector and is imaged. By taking pictures at different Z focus levels, specialized software interpolates between the different planes and reconstructs the surface geometry into a 3D model. As is easy to deduce, it is necessary to give traceability to each axis. As a complementary result, the roughness parameter Ra is traced to the reference. Although the solution is designed for a confocal microscope, it may be used for the calibration of other optical measuring instruments with minor changes.
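The z-stack reconstruction described above, picking for each lateral position the focal plane with the strongest in-focus response, can be sketched as follows on a tiny synthetic stack (the focus metric and the stack itself are illustrative assumptions, not the microscope's actual processing):

```python
def height_map(stack, z_levels):
    """For each (x, y) pixel, pick the z level whose image has the strongest
    in-focus response; returns a 2D map of surface heights."""
    rows, cols = len(stack[0]), len(stack[0][0])
    surface = [[0.0] * cols for _ in range(rows)]
    for x in range(rows):
        for y in range(cols):
            responses = [stack[k][x][y] for k in range(len(stack))]
            surface[x][y] = z_levels[responses.index(max(responses))]
    return surface

# Synthetic 3-level stack of 2x2 images: each pixel responds most strongly
# at the plane where the (known) surface lies.
z_levels = [0.0, 0.5, 1.0]                     # focus heights, illustrative units
true_surface = [[0.0, 0.5], [1.0, 0.5]]
stack = [[[1.0 if z == true_surface[x][y] else 0.2 for y in range(2)]
          for x in range(2)] for z in z_levels]

recovered = height_map(stack, z_levels)        # should equal true_surface
```

Calibration of each axis then amounts to checking that the recovered heights and lateral spacings of a material standard match its certified values; real software also interpolates between planes rather than taking a plain argmax.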

Keywords: industrial environment, confocal microscope, optical measuring instrument, traceability

Procedia PDF Downloads 117
111 Plasma Technology for Hazardous Biomedical Waste Treatment

Authors: V. E. Messerle, A. L. Mosse, O. A. Lavrichshev, A. N. Nikonchuk, A. B. Ustimenko

Abstract:

One of the most serious environmental problems today is pollution by biomedical waste (BMW), which in most cases has undesirable properties such as toxicity, carcinogenicity, mutagenicity, and flammability. Sanitary and hygienic surveys of typical solid BMW, carried out in Belarus, Kazakhstan, Russia, and other countries, show that its risk to the environment is significantly higher than that of most chemical wastes. Utilization of toxic BMW requires the most universal methods available to ensure disinfection and disposal of all of its components; plasma processing of BMW is such a technology. To implement it, a thermodynamic analysis of the plasma processing of BMW was performed and a plasma-box furnace was developed. The studies were conducted using bone processing as an example. Thermodynamic calculations were performed with the Terra software package in the temperature range 300-3000 K at a pressure of 0.1 MPa. The calculations show that the final products contain no toxic substances. The organic mass of BMW mainly yields synthesis gas containing 77.4-84.6% combustible components, while the mineral part consists mainly of calcium oxide and contains no carbon. The degree of carbon gasification reaches 100% at 1250 K. The specific power consumption for BMW processing increases with temperature throughout this range and reaches 1 kWh/kg. To realize plasma processing of BMW, an experimental installation with a 30 kW DC plasma torch was developed, and the experiments allowed the thermodynamic calculations to be verified. Wastes are packed in boxes weighing 5-7 kg, which are placed in the box furnace. Under the influence of the air plasma flame, the average temperature in the box reaches 1800 °C; the organic part of the waste is gasified and the inorganic part is melted. The resulting synthesis gas is continuously withdrawn from the unit through the cooling and cleaning system.
The molten mineral part of the waste is removed from the furnace after it has been stopped. The experimental studies allowed the operating modes of the plasma box furnace to be determined; the exhaust gases were analyzed, and samples of the condensed products were collected and their chemical composition determined. The gas at the outlet of the plasma box furnace has the following composition (vol.%): CO - 63.4, H2 - 6.2, N2 - 29.6, S - 0.8. The total concentration of synthesis gas (CO + H2) is 69.6%, which agrees well with the thermodynamic calculation. The experiments confirmed the absence of toxic substances in the final products.
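The reported outlet composition can be cross-checked with a few lines of arithmetic; the values below are the ones quoted in the abstract:

```python
# Outlet gas composition of the plasma box furnace, vol.% (from the abstract).
composition = {"CO": 63.4, "H2": 6.2, "N2": 29.6, "S": 0.8}

total = sum(composition.values())               # should close to 100 vol.%
syngas = composition["CO"] + composition["H2"]  # combustible (synthesis gas) fraction
```

The composition closes to 100 vol.% and the CO + H2 sum reproduces the 69.6% synthesis gas concentration quoted against the thermodynamic calculation.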

Keywords: biomedical waste, box furnace, plasma torch, processing, synthesis gas

Procedia PDF Downloads 494
110 Analysing the Perception of Climate Hazards on Biodiversity Conservation in Mining Landscapes within Southwestern Ghana

Authors: Salamatu Shaibu, Jan Hernning Sommer

Abstract:

Integrating biodiversity conservation practices into mining landscapes ensures the continual provision of ecosystem services to dependent communities while serving as ecological insurance for corporate mining companies when purchasing reclamation security bonds. Climate hazards such as long dry seasons, erratic rainfall patterns, and extreme weather events contribute to biodiversity loss in addition to the impact of mining itself. Both corporate mining companies and mine-fringe communities perceive the effect of climate on biodiversity through the benefits they accrue, which motivate their conservation practices. In this study, pragmatic approaches including semi-structured interviews, field observation, and literature review were used to collect data from corporate mining employees and households of fringe communities in the southwestern mining hub. The perceived changes in local climatic conditions and their consequences for environmental management practices that promote biodiversity conservation were examined. Using a thematic content analysis tool, the results show that best practices such as concurrent land rehabilitation, reclamation ponds, artificial wetlands, land clearance, and topsoil management are directly affected by prolonged dry seasons and erratic rainfall patterns. Excessive dust and noise generation directly affects both floral and faunal diversity, coupled with frequent fire outbreaks in rehabilitated lands and nearby forest reserves. Proposed adaptive measures include engaging national conservation authorities to promote reforestation projects around forest reserves.
They also include calling on the national government to desist from permitting mining concessions in forest reserves, engaging local communities through educational campaigns to control forest encroachment and burning, promoting community-based resource management to foster community ownership, and enacting stricter environmental legislation to compel corporate, artisanal, and small-scale mining companies to promote biodiversity conservation.

Keywords: biodiversity conservation, climate hazards, corporate mining, mining landscapes

Procedia PDF Downloads 182
109 The Verification Study of Computational Fluid Dynamics Model of the Aircraft Piston Engine

Authors: Lukasz Grabowski, Konrad Pietrykowski, Michal Bialy

Abstract:

This paper presents the results of research to verify the combustion process in the ASz-62IR aircraft piston engine. This engine was modernized and a new type of ignition system was developed for it. Due to the high cost of experiments on a nine-cylinder, 1,000 hp aircraft engine, a simulation technique should be applied, and using computational fluid dynamics (CFD) to simulate the combustion process is a reasonable solution. Accordingly, simulations for various ignition advance angles were carried out and the optimal value to be tested on a real engine was specified. The CFD model was created with the AVL Fire software. The engine in this research has two spark plugs per cylinder, and the ignition advance angles had to be set separately for each spark. The simulation results were verified by comparing the in-cylinder pressure: the simulated courses were compared with the indicated pressure of the engine mounted on a test stand. The real pressure course was measured with an OPTRAND optical sensor, designed especially for engine combustion research and mounted in a specially drilled hole between the valves. The indicated pressure was measured in cylinder no. 3, with the engine running at take-off power and loaded by a propeller on a special test bench. The verification of the CFD simulation was based on the results of these test bench studies. The simulated pressure course lies within the measurement error of the optical sensor; this error is 1% and reflects the hysteresis and nonlinearity of the sensor. Comparing the real indicated pressure measured in the cylinder with the pressure taken from the simulation, it can be claimed that the pressure-based verification of the CFD simulation was a success. The next step was to research the impact of changing the ignition advance timing of spark plugs 1 and 2 on the combustion process.
Offsetting the ignition timing between spark plugs 1 and 2 results in longer and uneven combustion of the mixture. The optimum point in terms of indicated power occurs when ignition is simultaneous for both spark plugs, but separated ignitions are used to ensure that ignition will occur at all engine speeds and loads. This should be confirmed by a bench experiment on the engine. Nevertheless, this simulation research enabled us to determine the optimal ignition advance angle to be implemented in the ignition control system. This knowledge allows the ignition point of the two spark plugs to be set so as to achieve as much power as possible.

Keywords: CFD model, combustion, engine, simulation

Procedia PDF Downloads 331
108 Frequency Interpretation of a Wave Function, and a Vertical Waveform Treated as a 'Quantum Leap'

Authors: Anthony Coogan

Abstract:

Born’s probability interpretation of wave functions would have led to nearly identical results had he chosen a frequency interpretation instead. Logically, Born may have assumed that only one electron was under consideration, making it nonsensical to propose a frequency wave. The author's suggestion is that the actual experimental results were not of a single electron; rather, they were groups of reflected X-ray photons. The vertical waveform used by Schrödinger in his particle-in-a-box theory makes sense if it was intended to represent a quantum leap. The author extended the single vertical panel to form a bar chart, in which separate panels would represent different energy levels, populated by reflected photons. Expansion of the basic ideas: part of Schrödinger's particle-in-a-box theory may be valid despite negative criticism. The waveform used in the diagram is vertical, which may seem absurd because real waves decay at a measurable rate rather than instantaneously. However, there may be one notable exception: supposedly, the Uncertainty Principle was derived from this theory - may a quantum leap not be represented as an instantaneous waveform? The great Schrödinger must have had some reason to suggest a vertical waveform if the prevalent belief was that such waveforms did not exist. Complex wave forms representing a particle are usually assumed to be continuous. The actual observations made were X-ray photons, some of which had struck an electron, been reflected, and then moved toward a detector. From Born's perspective, doing similar work in the years in question, 1926-27, he would also have considered a single electron - leading him to choose a probability distribution. Probability distributions appear very similar to frequency distributions, but the former are considered to represent the likelihood of future events.
Born’s interpretation of the results of quantum experiments led (or perhaps misled) many researchers into claiming that humans can influence events just by looking at them, e.g. collapsing complex wave functions by 'looking at the electron to see which slit it emerged from', while in reality light reflected from the electron moved in the observer's direction after the electron had moved away. Astronomers may say that they 'look out into the universe', but this logic is opposed to the views of Newton and Hooke and of observers such as Romer, in that light carries information from a source or reflector to an observer, rather than the reverse. Conclusion: due to the controversial nature of these ideas, especially their implications for the complex numbers used in applications in science and engineering, some time may pass before any consensus is reached.

Keywords: complex wave functions not necessary, frequency distributions instead of wave functions, information carried by light, sketch graph of uncertainty principle

Procedia PDF Downloads 169
107 Robotic Solution for Nuclear Facility Safety and Monitoring System

Authors: Altab Hossain, Shakerul Islam, Golamur R. Khan, Abu Zafar M. Salahuddin

Abstract:

Effective identification of breakdowns is of premier importance for the safe and reliable operation of Nuclear Power Plants (NPPs) and their associated facilities. A great number of monitoring and diagnosis methodologies are used worldwide in areas such as industry, automobiles, hospitals, and power plants to detect faults and reduce human disasters. Several hazardous activities at nuclear and associated facilities can potentially harm society. Hence, one of the most popular and effective ways to ensure safety, monitor the entire nuclear facility, and achieve risk-free operation without human interference during a hazardous situation is to use a robot. Therefore, in this study, an advanced autonomous robot has been designed and developed that can monitor several parameters in an NPP to ensure safety and perform risky jobs in case of a nuclear disaster. The robot consists of an autonomous track-following unit and a data processing and transmitting unit; it can follow a straight line and take turns with banks greater than 90 degrees. The developed robot can analyze various parameters such as temperature, altitude, radiation, obstacles, and humidity, detect fire, measure distance, perform ultrasonic scans, and take the heat signature of any particular object. It is able to broadcast a live stream and record documents to its own server memory. A separate control unit, built around a baseboard, processes the recorded data, and a transmitter transmits the processed data. To make the robot user-friendly, the code is developed in such a way that a user can control each robotic arm according to the type of work. To allow control from any place and off the track, code has been developed for manual override. Through this process, an administrator with logged-in permission via the Dynamic Host Configuration Protocol (DHCP) can take over control of the robot.
This process provides the robot with maximum protection against being hacked. Beyond NPPs, this robot can be used to strengthen the real-time monitoring system of any nuclear facility, as well as nuclear material transportation and decomposition systems.
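The parameter monitoring the robot performs can be sketched as a threshold check over one cycle of sensor readings; the sensor names and limits below are illustrative assumptions, not values from the study:

```python
# Illustrative alarm thresholds for a facility-monitoring robot.
THRESHOLDS = {
    "temperature_c": 60.0,       # assumed safe upper bound
    "radiation_usv_h": 10.0,     # assumed dose-rate limit
    "humidity_pct": 90.0,
}

def check_readings(readings, thresholds=THRESHOLDS):
    """Compare one cycle of sensor readings against thresholds and
    return the list of parameters that require an alert."""
    return [name for name, value in readings.items()
            if name in thresholds and value > thresholds[name]]

# One hypothetical monitoring cycle transmitted by the robot.
cycle = {"temperature_c": 72.5, "radiation_usv_h": 3.1, "humidity_pct": 95.2}
alerts = check_readings(cycle)   # temperature and humidity exceed their limits
```

On the real system this check would run on the control unit's baseboard, with the transmitter forwarding only readings and alerts to the logged-in administrator.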

Keywords: nuclear power plant, radiation, dynamic host configuration protocol, nuclear security

Procedia PDF Downloads 182
106 Environmental Photodegradation of Tralkoxydim Herbicide and Its Formulation in Natural Waters

Authors: María José Patiño-Ropero, Manuel Alcamí, Al Mokhtar Lamsabhi, José Luis Alonso-Prados, Pilar Sandín-España

Abstract:

Tralkoxydim, commercialized under different trade names, among them Splendor® (25% active ingredient), is a cyclohexanedione herbicide used in wheat and barley fields for the post-emergence control of annual winter grass weeds. Owing to their physicochemical properties, herbicides of this family are known to be liable to reach natural waters, where different degradation pathways can take place. Photolysis represents one of the main routes of abiotic degradation of these herbicides in water. This transformation pathway can lead to the formation of unknown by-products, which could be more toxic and/or persistent than the active substances themselves. Therefore, there is a growing need to understand such dissipation routes, which is key to estimating the persistence of these compounds and ensuring an accurate assessment of their environmental behavior. However, to the best of our knowledge, no information on the photochemical behavior of tralkoxydim under natural conditions in an aqueous environment has been available in the literature until now. This work has focused on investigating the photochemical behavior of the tralkoxydim herbicide and its commercial formulation (Splendor®) in ultrapure, river, and spring water under simulated solar radiation. Besides, the evolution of the degradation products detected in the samples has been studied. A reversed-phase HPLC-DAD (high-performance liquid chromatography with diode array detector) method was developed to follow the kinetics and obtain the half-lives. In both cases, the degradation of the active ingredient tralkoxydim was slower in natural waters than in ultrapure water, following the order river water < spring water < ultrapure water, with first-order half-lives of 5.1 h, 2.7 h, and 1.1 h, respectively.
These findings indicate that the photolytic behavior of the active ingredient is strongly affected by water composition, whose components can exert an inner filter effect. In addition, tralkoxydim and its formulation showed the same half-lives in each type of water studied, showing that the adjuvants in the commercial formulation have no effect on the degradation rate of the active ingredient. HPLC-MS (high-performance liquid chromatography with mass spectrometry) experiments were performed to study the by-products deriving from the photodegradation of tralkoxydim in water; three compounds were tentatively identified. These results provide a better understanding of the behavior of tralkoxydim in natural waters and its fate in the environment.
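As a quick illustration of the first-order kinetics reported above, each half-life gives a rate constant k = ln 2 / t½. The sketch below (Python, using the half-lives from this abstract) compares the fraction of tralkoxydim remaining after a fixed irradiation time across the three water types; the 8 h horizon is an arbitrary choice for illustration:

```python
import math

# First-order photolysis: C(t) = C0 * exp(-k t), with k = ln(2) / t_half.
# Half-lives (hours) are the values reported in the abstract.
half_lives_h = {"ultrapure": 1.1, "spring": 2.7, "river": 5.1}

for water, t_half in half_lives_h.items():
    k = math.log(2) / t_half               # rate constant, h^-1
    frac_left_after_8h = math.exp(-k * 8)  # fraction remaining after 8 h
    print(f"{water:9s} k = {k:.3f} h^-1, remaining after 8 h = {frac_left_after_8h:.1%}")
```

The slower decay in river water (largest t½, smallest k) directly reflects the inner filter effect described above.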

Keywords: by-products, natural waters, photodegradation, tralkoxydim herbicide

Procedia PDF Downloads 56
105 Survey of Indoor Radon/Thoron Concentrations in High Lung Cancer Incidence Area in India

Authors: Zoliana Bawitlung, P. C. Rohmingliana, L. Z. Chhangte, Remlal Siama, Hming Chungnunga, Vanram Lawma, L. Hnamte, B. K. Sahoo, B. K. Sapra, J. Malsawma

Abstract:

Mizoram state has the highest lung cancer incidence rate in India, driven by high consumption of tobacco and tobacco products and compounded by local food habits. While smoking is mainly responsible for this incidence, the contribution of inhaled indoor radon gas cannot be discarded: the hazardous nature of this radioactive gas and its progenies is well established worldwide, and radiation damage to bronchial cells can make radon the second leading cause of lung cancer after smoking. It is also known that the effect of radiation, however small the concentration, cannot be neglected, as it can raise the risk of cancer. Hence, estimation of indoor radon concentration is important to provide a useful reference for radiation effects, to establish safety measures, and to create a baseline for further case-control studies. Indoor radon/thoron concentrations in Mizoram were measured in 41 dwellings, selected on the basis of spot gamma background radiation and house construction type, during 2015-2016. The dwellings were monitored for one year, in four-month cycles to capture seasonal variation, for indoor radon gas and its progenies, outdoor gamma dose, and indoor gamma dose. A time-integrated method using Solid State Nuclear Track Detector (SSNTD)-based single-entry pin-hole dosimeters was used to measure indoor radon/thoron concentrations. Gamma dose measurements, indoor and outdoor, were carried out using Geiger-Muller survey meters. Seasonal variation of indoor radon/thoron concentration was monitored. The results show that annual average radon concentrations varied from 54.07 to 144.72 Bq/m³ (mean 90.20 Bq/m³) and annual average thoron concentrations from 17.39 to 54.19 Bq/m³ (mean 35.91 Bq/m³), both below the permissible limit.
The spot survey of gamma background radiation levels, inside and outside dwellings throughout Mizoram, varied between 9 and 24 µR/h, all within acceptable limits. From these results, there is no direct indication that radon/thoron is responsible for the high lung cancer incidence in the area. Finding epidemiological evidence linking natural radiation to the high cancer incidence would require a case-control study, which is beyond the scope of this work. However, the measured data will provide a baseline for further studies.
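For context, the mean radon concentration above can be translated into an indicative annual effective dose using the standard UNSCEAR convention. The sketch below is illustrative only: the equilibrium factor, occupancy hours, and dose coefficient are conventional assumptions, not values from this study:

```python
# Indicative annual effective dose from the mean indoor radon concentration,
# following the UNSCEAR 2000 convention. All factors below are assumptions
# for illustration; only the mean concentration comes from the abstract.
mean_radon_Bq_m3 = 90.20   # annual average radon, from the abstract
F = 0.4                    # indoor equilibrium factor (assumed)
occupancy_h = 7000         # hours spent indoors per year (assumed)
dcf = 9e-6                 # mSv per (Bq h m^-3), dose conversion factor (assumed)

dose_mSv_per_year = mean_radon_Bq_m3 * F * occupancy_h * dcf
print(f"indicative dose: ~{dose_mSv_per_year:.1f} mSv/y")
```

A value of roughly 2 mSv/y is comparable to typical worldwide background exposure, consistent with the abstract's conclusion that radon alone does not explain the excess lung cancer incidence.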

Keywords: background gamma radiation, indoor radon/thoron, lung cancer, seasonal variation

Procedia PDF Downloads 115
104 Developing Improvements to Multi-Hazard Risk Assessments

Authors: A. Fathianpour, M. B. Jelodar, S. Wilkinson

Abstract:

This paper outlines approaches to multi-hazard risk assessment. There is currently confusion in assessing multi-hazard impacts, so this study aims to determine which of the available options are the most useful. The paper uses an international literature search, analysis of current multi-hazard assessments, and a case study to illustrate the effectiveness of the chosen method. Findings from this study will help those wanting to assess multi-hazards to adopt a straightforward approach. The paper is significant as it helps to interpret the various approaches and concludes with the preferred method. Many people in the world live in hazardous environments and are susceptible to disasters. Unfortunately, when a disaster strikes, it is often compounded by additional cascading hazards, so people may confront more than one hazard simultaneously. Hazards include natural hazards (earthquakes, floods, etc.) or cascading human-made hazards (for example, Natural Hazard Triggering Technological disasters (Natech) such as fire, explosion, and toxic release). Multi-hazards have a more destructive impact on urban areas than any one hazard alone. In addition, climate change is creating links between different disasters, such as causing landslide dams and debris flows, leading to more destructive incidents. Much of the prevailing literature deals with only one hazard at a time; however, sophisticated multi-hazard assessments have recently started to appear. Given that multi-hazards occur, it is essential to take multi-hazard risk assessment into consideration. This paper reviews the multi-hazard assessment methods published to date and categorizes the strengths and disadvantages of using these methods in risk assessment. Napier City is selected as a case study to demonstrate the necessity of multi-hazard risk assessments.
To assess multi-hazard risk assessments, the current methods were first described; next, their drawbacks were outlined; finally, the improvements made to date were summarised. Generally, the main problem of multi-hazard risk assessment is making valid assumptions about the risk arising from interactions between different hazards. Risk assessment studies have started to address multi-hazard situations, but drawbacks such as uncertainty and lack of data show the necessity for more precise assessment. It should be noted that ignoring, or only partially considering, multi-hazards in risk assessment will lead to risks being overestimated or overlooked in resilience and recovery management.

Keywords: cascading hazards, disaster assessment, multi-hazards, risk assessment

Procedia PDF Downloads 84
103 Antibacterial Effects of Some Medicinal and Aromatic Plant Extracts on Pathogenic Bacteria Isolated from Pear Orchards

Authors: Kubilay Kurtulus Bastas

Abstract:

Bacterial diseases are very destructive and cause economic losses on pears. Plant extracts are promising for the management of plant diseases because they are environmentally safe and long-lasting, and extracts of certain plants contain alkaloids, tannins, quinones, coumarins, phenolic compounds, and phytoalexins. In this study, bacteria were isolated from different parts of pear trees exhibiting characteristic symptoms of bacterial diseases in Central Anatolia, Turkey. The pathogens were identified by morphological, physiological, biochemical, and molecular methods as fire blight (Erwinia amylovora, 39%), bacterial blossom blast and blister bark (Pseudomonas syringae pv. syringae, 22%), and crown gall (Rhizobium radiobacter, 1%) from different pear cultivars, and their virulence levels were determined with pathogenicity tests. Air-dried material of the 25 plants was ground into fine powder, and extraction was performed at room temperature by maceration with 80% (v/v) methanol/distilled water. Minimum inhibitory concentration (MIC) values were determined using a modified disc diffusion method at five different concentrations, with streptomycin sulphate as the control chemical. Bacterial suspensions were prepared at densities of 10⁸ CFU ml⁻¹, and 100 µl of each suspension was spread on TSA medium. Antimicrobial activity was evaluated by measuring the inhibition zones against the test organisms. Among the tested plants, Origanum vulgare, Hedera helix, Satureja hortensis, Rhus coriaria, Eucalyptus globulus, Rosmarinus officinalis, Ocimum basilicum, Salvia officinalis, Cuminum cyminum and Thymus vulgaris showed good antibacterial activity, inhibiting the growth of the pathogens with inhibition zone diameters ranging from 7 to 27 mm at 20% (w/v) in absolute methanol in vitro. In vivo, the highest efficacy on pear seedlings was 27% in reducing tumor formation by R. radiobacter, and 48% and 41% in reducing shoot blight by E. amylovora and P. syringae pv. syringae, respectively. The data obtained indicate that some plant extracts may be used against bacterial diseases of pome fruits within sustainable and organic management programs.

Keywords: bacteria, eco-friendly management, organic, pear, plant extract

Procedia PDF Downloads 294
102 Research on the Planning Spatial Mode of China's Overseas Industrial Park

Authors: Sidong Zhao, Xingping Wang

Abstract:

Recently, the government of China has provided strong support for the development of overseas industrial parks. Their global distribution has gradually moved from 'sparks of fire' to 'prairie fires.' This support and spread have elevated the development of overseas industrial parks into a strategy for constructing China's new open economic system and a typical representative of the 'Chinese wisdom' and 'China's plans' that China has contributed to globalization in the new era under the Belt and Road Initiative. As industrial parks provide 'work/employment', a basic function of the city (Athens Charter), planning for their development has long been a focus of urban planning. Based on research on the planning and analysis of the present development of some typical Chinese overseas industrial parks, we found some interesting patterns. First, a large number of Chinese overseas industrial parks are located in less developed countries. These parks have become significant drivers of the development of their host cities, and even of the surrounding regions, especially in local investment, employment, and tax revenue; as a result, the planning and development of overseas industrial parks have received extensive attention. Second, a small number of the overseas parks have problems, such as park planning that does not follow the host city's planning and a lack of implementation of the park planning. These problems have led to difficulties in implementing the plans and in the parks' sustainable development. Third, a unique pattern of spatial development has formed. In the dimension of regional spatial distribution, there are five characteristics: along the coast, along rivers, along main traffic lines and hubs, around central urban areas, and along regional economic linkages.
In the dimension of the spatial relationship between park and city, there is an evolving trend from 'separation' to 'integration' to 'union'. In the dimension of the spatial mode of the parks themselves, there are different patterns of development, such as the specialized industrial park, the complex industrial park, the characteristic town, and the new urban industrial area. Judging from these trends and spatial modes, future planning of Chinese overseas industrial parks should emphasize the idea of 'building a city based on the industrial park': moving their development from 'driven by policy' to 'driven by the functions of the city', accelerating the formation of a system of Chinese overseas industrial parks, and integrating the parks with their cities.

Keywords: overseas industrial park, spatial mode, planning, China

Procedia PDF Downloads 159
101 Improvements of the Difficulty in Hospital Acceptance at the Scene by the Introduction of Smartphone Application for Emergency-Medical-Service System: A Population-Based Before-And-After Observation Study in Osaka City, Japan

Authors: Yusuke Katayama, Tetsuhisa Kitamura, Kosuke Kiyohara, Sumito Hayashida, Taku Iwami, Takashi Kawamura, Takeshi Shimazu

Abstract:

Background: Recently, the number of ambulance dispatches has been increasing in Japan, and it is therefore difficult to accept emergency patients to hospitals smoothly and appropriately because of limited hospital capacity. To facilitate patient transport requests by ambulances and hospital acceptance, emergency information systems using information technology have been built and introduced in various communities. However, their effectiveness has not been sufficiently demonstrated in Japan. In 2013, we developed a smartphone application system that enables emergency-medical-service (EMS) personnel to share information about the on-scene ambulance and hospital situation. The aim of this study was to assess the effect of introducing this application into the EMS system in Osaka City, Japan. Methods: This was a retrospective study with population-based ambulance records of the Osaka Municipal Fire Department. The study period was six years, from January 1, 2010 to December 31, 2015. We enrolled emergency patients for whom on-scene EMS personnel conducted hospital selection. The main endpoint was difficulty in hospital acceptance at the scene, defined as EMS personnel making >=5 phone calls to hospitals at the scene before a transport decision was reached. The smartphone application group was defined as emergency patients transported in 2013-2015, after the introduction of the application, and we assessed the effect of the application with a multivariable logistic regression model. Results: A total of 600,526 emergency patients for whom EMS personnel selected hospitals were eligible for our analysis: 300,131 patients (50.0%) in the non-smartphone application group in 2010-2012 and 300,395 patients (50.0%) in the smartphone application group in 2013-2015.
The proportion of difficulty in hospital acceptance was 14.2% (42,585/300,131) in the non-smartphone application group and 10.9% (32,819/300,395) in the smartphone application group; difficulty in hospital acceptance thus decreased significantly after the introduction of the smartphone application (adjusted odds ratio 0.730, 95% confidence interval 0.718-0.741, P<0.001). Conclusions: Sharing information between ambulances and hospitals at the scene through the smartphone application was associated with decreased difficulty in hospital acceptance. Our findings may be considerably useful for developing IT-based emergency medical information systems in other areas of the world.
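As a sanity check on the reported effect size, a crude (unadjusted) odds ratio can be recomputed from the counts in this abstract. The authors' 0.730 is an adjusted value from their multivariable model, so the sketch below is only a back-of-envelope approximation using the raw 2x2 counts:

```python
import math

# Crude odds ratio for "difficulty in hospital acceptance" before vs. after
# the app's introduction, from the counts reported in the abstract.
pre_diff, pre_total = 42585, 300131    # 2010-2012, before the app
post_diff, post_total = 32819, 300395  # 2013-2015, after the app

odds_pre = pre_diff / (pre_total - pre_diff)
odds_post = post_diff / (post_total - post_diff)
crude_or = odds_post / odds_pre

# Wald 95% CI on the log-odds-ratio scale.
se = math.sqrt(1/pre_diff + 1/(pre_total - pre_diff)
               + 1/post_diff + 1/(post_total - post_diff))
ci = (math.exp(math.log(crude_or) - 1.96 * se),
      math.exp(math.log(crude_or) + 1.96 * se))
print(f"crude OR = {crude_or:.3f}, 95% CI = ({ci[0]:.3f}, {ci[1]:.3f})")
```

The crude OR lands close to the adjusted 0.730, which is what one would expect when the covariate adjustment shifts the estimate only modestly.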

Keywords: difficulty in hospital acceptance, emergency medical service, information technology, smartphone application

Procedia PDF Downloads 249
100 Estimating the Ladder Angle and the Camera Position From a 2D Photograph Based on Applications of Projective Geometry and Matrix Analysis

Authors: Inigo Beckett

Abstract:

In forensic investigations, it is often the case that the most potentially useful recorded evidence derives from coincidental imagery, recorded immediately before or during an incident, and that during the incident (e.g., a 'failure' or fire event) the evidence is changed or destroyed. For an image analysis expert carrying out photogrammetric analysis for civil or criminal proceedings, traditional computer vision methods involving calibrated cameras are often not appropriate because image metadata cannot be relied upon. This paper presents an approach for resolving this problem, considering in particular, by way of a case study, the angle of a simple ladder shown in a photograph. The UK Health and Safety Executive (HSE) guidance document published in 2014 (INDG455) advises that a leaning ladder should be erected at 75 degrees to the horizontal. Personal injury cases can arise in the construction industry because a ladder is too steep or too shallow, and ad-hoc photographs of such ladders in their incident position provide a basis for analysis of their angle. This paper presents a direct approach for ascertaining the position of the camera and the angle of the ladder simultaneously from the photograph(s), via a workflow that encompasses a novel application of projective geometry and matrix analysis. Mathematical analysis shows that, for a given pixel ratio of directly measured collinear points (i.e., features that lie on the same line segment) in the 2D digital photograph with respect to a given viewing point, the 3D camera position is constrained to the surface of a sphere in the scene. Depending on what is known about the ladder, another independent constraint can be enforced on the possible camera positions, narrowing them further. Experiments were conducted using synthetic and real-world data. The synthetic data modeled a vertical plane with a ladder resting on a horizontal ground plane against a vertical wall.
The real-world data were captured using an Apple iPhone 13 Pro together with 3D laser scan survey data, with a ladder placed at a known location and angle to the vertical. For each case, we calculated camera positions and ladder angles using this method and cross-compared them against their respective 'true' values.
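The published workflow is not reproduced here, but its final step, recovering a lean angle once the ladder's vertical plane has been rectified by a plane-to-plane homography, can be sketched as follows. The identity homography and the point coordinates below are invented for illustration; in practice H would come from the projective-geometry analysis described above:

```python
import math

# Toy sketch: given a homography H that rectifies the ladder's vertical plane,
# the lean angle follows from the rectified foot and wall-contact points.
# H is the identity here, i.e. we pretend the view is already rectified.

def apply_homography(H, x, y):
    """Map image point (x, y) through a 3x3 homography given as nested lists."""
    u = H[0][0]*x + H[0][1]*y + H[0][2]
    v = H[1][0]*x + H[1][1]*y + H[1][2]
    w = H[2][0]*x + H[2][1]*y + H[2][2]
    return u / w, v / w

H = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]   # identity (assumed, already rectified)
foot = apply_homography(H, 1.0, 0.0)     # ladder foot on the ground (invented)
top = apply_homography(H, 0.0, 3.732)    # contact point up the wall (invented)

angle = math.degrees(math.atan2(abs(top[1] - foot[1]), abs(top[0] - foot[0])))
print(f"estimated lean angle: {angle:.1f} degrees")
```

With the invented coordinates chosen so that rise/run = tan 75°, the recovered angle matches the HSE guidance value, which is how the synthetic-data validation described above would be checked.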

Keywords: image analysis, projective geometry, homography, photogrammetry, ladders, forensics, mathematical modeling, planar geometry, matrix analysis, collinear, cameras, photographs

Procedia PDF Downloads 17
99 Psychosocial Support in Disaster Situations in the Philippines and Indonesia: A Critical Literature Review

Authors: Fuad Hamsyah

Abstract:

Over the last two decades, major disasters have struck the Philippines and Indonesia, two countries located in the Pacific Ring of Fire. In Southeast Asian countries, the provision of psychosocial support faces various constraints, such as the limited number of mental health professionals and limited knowledge about providing psychosocial support to disaster survivors. After the tsunami disaster of 2004, however, many Asian countries began to develop policies on the provision of psychosocial interventions as part of preparedness for future disasters. In addition, mental health professionals have to consider local cultural values and beliefs in order to provide effective psychosocial support, since these play a significant role in the diversity of psychological distress that shapes symptom formation and in the ways people seek psychological assistance. This study is a critical literature review of 130 selected relevant documents, using the IASC MHPSS guideline as the research framework for the critical analysis. The purpose of this study is to conduct a critical analysis of mental health and psychosocial support provision in the Philippines and Indonesia, with three main objectives: 1) to describe strengths, weaknesses, and challenges in the psychosocial support provided by public and private organizations in emergency disaster settings in the Philippines and Indonesia; 2) to compare psychosocial support practices between the Philippines and Indonesia and identify good practices in each country; 3) to learn how cultural values influence the implementation of psychosocial support in emergency disaster settings. This research indicates that almost every function of the IASC MHPSS guideline has been implemented effectively in the Philippines and Indonesia, though not in every detail.
Several similarities and differences between the two countries are also identified using the IASC MHPSS guideline as the analysis framework. Further, both countries have good practices that can serve as examples of comprehensive psychosocial support implementation. Beyond the IASC MHPSS guideline, cultural values and beliefs in the Philippines such as kanya-kanya syndrome, pakikipakapwa, utang na loob, bahala na, and pagkaya are identified as having a strong influence on people's attitudes and behavior in disaster situations, while in Indonesia cultural values such as sabar and nrimo are two important attitudes for coping with disaster situations.

Keywords: disaster, Indonesia, psychosocial support, Philippines

Procedia PDF Downloads 363
98 Miniaturization of Germanium Photo-Detectors by Using Micro-Disk Resonator

Authors: Haifeng Zhou, Tsungyang Liow, Xiaoguang Tu, Eujin Lim, Chao Li, Junfeng Song, Xianshu Luo, Ying Huang, Lianxi Jia, Lianwee Luo, Kim Dowon, Qing Fang, Mingbin Yu, Guoqiang Lo

Abstract:

Several germanium photodetectors (PDs) built on silicon micro-disks were fabricated on standard Si photonics multi-project wafers (MPW) and demonstrated to exhibit very low dark current, satisfactory operating bandwidth, and moderate responsivity. Among them, a vertical p-i-n Ge PD based on a 2.0 µm-radius micro-disk has a dark current as low as 35 nA, compared with a current of 1 µA for a conventional PD with an area of 100 µm². The operating bandwidth is around 15 GHz at a reverse bias of 1 V, and the responsivity is about 0.6 A/W. The microdisk is a striking planar structure in integrated optics for enhancing light-matter interaction and constructing various photonic devices. Disk geometries strongly confine light circularly into an ultra-small volume in the form of whispering gallery modes. A laser may benefit from a microdisk in which a single mode overlaps the gain material both spatially and spectrally. Compared with microrings, the microdisk removes the inner boundary, enabling even better compactness, which also makes it well suited to scenarios where electrical connections are needed. For example, an ultra-low power (≈ fJ) athermal Si modulator has been demonstrated at a bit rate of 25 Gbit/s by confining both photons and electrically driven carriers into a microscale volume. In this work, we study Si-based PDs with Ge selectively grown on microdisks with radii of a few microns. A unique feature of using a microdisk for a Ge photodetector is that mode selection is not important. In laser or other passive optical applications, the microdisk must be designed very carefully to excite the fundamental mode, since a microdisk usually supports many higher-order modes in the radial direction. For detector applications, however, this is not an issue, because local light absorption is mode-insensitive: light power carried by all modes is expected to be converted into photocurrent.
Another benefit of using a microdisk is that the internal power circulation avoids the need for a reflector. A complete simulation model taking all involved materials into account is established to study the promise of microdisk structures for photodetectors using the finite-difference time-domain (FDTD) method. Based on the current preliminary data, directions for further improving the device performance are also discussed.
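As a rough illustration of the reported figures (not a calculation from the paper), the 0.6 A/W responsivity and 35 nA dark current imply a comfortable signal margin even at modest optical input; the input power below is an assumed value:

```python
# Back-of-envelope comparison of photocurrent and dark current for the
# reported microdisk PD. Responsivity and dark current are from the abstract;
# the optical input power is assumed for illustration.
responsivity_A_per_W = 0.6   # A/W (reported)
dark_current_A = 35e-9       # 35 nA (reported)
optical_power_W = 1e-6       # 1 uW input, i.e. -30 dBm (assumed)

photocurrent_A = responsivity_A_per_W * optical_power_W
margin = photocurrent_A / dark_current_A
print(f"photocurrent = {photocurrent_A*1e9:.0f} nA, ~{margin:.0f}x the dark current")
```

Even at this modest input, the photocurrent exceeds the dark current by more than an order of magnitude, illustrating why the low dark current of the microdisk geometry matters for sensitivity.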

Keywords: integrated optical devices, silicon photonics, micro-resonator, photodetectors

Procedia PDF Downloads 381
97 Development of an Appropriate Method for the Determination of Multiple Mycotoxins in Pork Processing Products by UHPLC-TCFLD

Authors: Jason Gica, Yi-Hsieng Samuel Wu, Deng-Jye Yang, Yi-Chen Chen

Abstract:

Mycotoxins, harmful secondary metabolites produced by certain fungal species, pose significant risks to animals and humans worldwide. Their stability leads to contamination during grain harvesting, transportation, and storage, as well as in processed food products. The prevalence of mycotoxin contamination has attracted significant attention due to its adverse impact on food safety and global trade. The secondary contamination pathway through animal products has been identified as an important route of exposure, posing health risks for livestock and for humans consuming contaminated products. Pork, one of the most highly consumed meat products in Taiwan according to the National Food Consumption Database, plays a critical role in the nation's diet and economy. Given its substantial consumption, pork processing products are a significant component of the food supply chain and a potential source of mycotoxin contamination. This study is paramount for formulating effective regulations and strategies to mitigate mycotoxin-related risks in the food supply chain. By establishing a reliable analytical method, this research contributes to safeguarding public health and enhancing the quality of pork processing products. The findings will serve as valuable guidance for policymakers, food industries, and consumers in ensuring a safer food supply chain in the face of emerging mycotoxin challenges. An innovative and efficient analytical approach is proposed using ultra-high-performance liquid chromatography coupled with a temperature-controlled fluorescence detector (UHPLC-TCFLD) to determine multiple mycotoxins in pork meat samples, owing to its exceptional capacity to detect multiple mycotoxins at very low concentrations, making it highly sensitive and reliable for comprehensive mycotoxin analysis.
Additionally, its ability to detect multiple mycotoxins simultaneously in a single run significantly reduces the time and resources required for analysis, making it a cost-effective solution for monitoring mycotoxin contamination in pork processing products. The research aims to optimize an efficient QuEChERS mycotoxin extraction method and rigorously validate its accuracy and precision. The results will provide crucial insights into mycotoxin levels in pork processing products.

Keywords: multiple-mycotoxin analysis, pork processing products, QuEChERS, UHPLC-TCFLD, validation

Procedia PDF Downloads 29
96 High-Pressure Polymorphism of 4,4-Bipyridine Hydrobromide

Authors: Michalina Aniola, Andrzej Katrusiak

Abstract:

4,4-Bipyridine is an important compound often used in chemical practice and, more recently, frequently applied in designing new metal-organic frameworks (MOFs). Here we present a systematic high-pressure study of its hydrobromide salt. 4,4-Bipyridine hydrobromide monohydrate, 44biPyHBrH₂O, is orthorhombic at ambient pressure, space group P2₁2₁2₁ (phase a). Hydrostatic compression shows that it is stable to at least 1.32 GPa. However, recrystallization above 0.55 GPa reveals a new hidden b-phase (monoclinic, P2₁/c). Moreover, when 44biPyHBrH₂O is heated to high temperature, chemical reactions of this compound in methanol solution can be observed. High-pressure experiments were performed using a Merrill-Bassett diamond-anvil cell (DAC), modified by mounting the anvils directly on the steel supports, and X-ray diffraction measurements were carried out on KUMA and Xcalibur diffractometers equipped with an EOS CCD detector. At elevated pressure, the crystal of 44biPyHBrH₂O exhibits several striking and unexpected features. No signs of instability of phase a were detected up to 1.32 GPa, while phase b becomes stable above 0.55 GPa, as evidenced by its recrystallizations. Phases a and b of 44biPyHBrH₂O are partly isostructural: their unit-cell dimensions and the arrangement of ions and water molecules are similar. In phase b, the HOH···Br⁻ chains double the frequency of their zigzag motifs compared with phase a, and the 44biPyH⁺ cations change their conformation. As in all monosalts of 44biPy determined so far, in phase a the pyridine rings are twisted by about 30 degrees about the C4-C4 bond, whereas in phase b they assume an energy-unfavorable planar conformation. Another unusual feature of 44biPyHBrH₂O is that all unit-cell parameters become longer on the transition from phase a to phase b; thus, the volume drop on transition to the high-pressure phase b depends entirely on the shear strain of the lattice.
Higher temperature triggers chemical reactions of 44biPyHBrH₂O with methanol. When the compound was precipitated from saturated methanol solution at 0.1 GPa, a temperature of 423 K was required to dissolve the whole sample, and the subsequent slow recrystallization under isochoric conditions yielded the disalt 4,4-bipyridinium dibromide. For a 44biPyHBrH₂O sample sealed in the DAC at 0.35 GPa, dissolved under isochoric conditions at 473 K and recrystallized by slow controlled cooling, an N,N-dimethylation reaction took place. It is characteristic that in both high-pressure reactions of 44biPyHBrH₂O the unsolvated disalt products were formed, while free base 44biPy and H₂O remained in solution. The observed reactions indicate that high pressure destabilizes the ambient-pressure salts and favors new products. Further studies on pressure-induced reactions are being carried out in order to better understand the structural preferences induced by pressure.

Keywords: conformation, high-pressure, negative area compressibility, polymorphism

Procedia PDF Downloads 216
95 Sequential and Combinatorial Pre-Treatment Strategy of Lignocellulose for the Enhanced Enzymatic Hydrolysis of Spent Coffee Waste

Authors: Rajeev Ravindran, Amit K. Jaiswal

Abstract:

Waste from the food-processing industry is produced in large amounts and contains high levels of lignocellulose. Its continuous accumulation throughout the year in large quantities creates a major environmental problem worldwide. The chemical composition of these wastes (up to 75% polysaccharides) makes them an inexpensive raw material for the production of value-added products such as biofuels, bio-solvents, nanocrystalline cellulose, and enzymes. In order to use lignocellulose as a raw material for microbial fermentation, the substrate is subjected to enzymatic treatment, which releases reducing sugars such as glucose and xylose. However, inherent properties of lignocellulose, such as the presence of lignin, pectin, acetyl groups, and crystalline cellulose, contribute to recalcitrance, leading to poor sugar yields upon enzymatic hydrolysis. A pre-treatment method is therefore generally applied before enzymatic treatment to remove the recalcitrant components of biomass through structural breakdown. The present study was carried out to find the best pre-treatment method for maximum liberation of reducing sugars from spent coffee waste (SPW). SPW was subjected to a range of physical, chemical, and physico-chemical pre-treatments, followed by a sequential, combinatorial pre-treatment strategy combining two or more pre-treatments to attain maximum sugar yield. All pre-treated samples were analysed for total reducing sugars, followed by identification and quantification of individual sugars by HPLC coupled with a refractive index (RI) detector. In addition, the generation of inhibitory compounds such as furfural and hydroxymethylfurfural (HMF), which can hinder microbial growth and enzyme activity, was monitored.
Results showed that ultrasound treatment (31.06 mg/L) was the best pre-treatment method in terms of total reducing sugar content, followed by dilute acid hydrolysis (10.03 mg/L), while galactose was found to be the major monosaccharide in the pre-treated SPW. Finally, the results were used to design a sequential lignocellulose pre-treatment protocol that decreases the formation of enzyme inhibitors and increases sugar yield on enzymatic hydrolysis with a cellulase-hemicellulase consortium. The sequential, combinatorial treatment was found to be better in terms of total reducing sugar yield and lower formation of inhibitory compounds, likely because this mode of pre-treatment combines several mild treatments rather than relying on a single harsh one. It eliminates the need for a detoxification step and has potential application in the valorisation of lignocellulosic food waste.

Keywords: lignocellulose, enzymatic hydrolysis, pre-treatment, ultrasound

Procedia PDF Downloads 342
94 Monitoring Memories by Using Brain Imaging

Authors: Deniz Erçelen, Özlem Selcuk Bozkurt

Abstract:

The course of daily human life calls for memories and for remembering the time and place of certain events. Recalling memories takes up a substantial amount of an individual's time. Unfortunately, scientists lack the technology to fully understand and observe the different brain regions that interact to form or retrieve memories. The hippocampus, a complex brain structure located in the temporal lobe, plays a crucial role in memory. The hippocampus forms memories and allows the brain to retrieve them by ensuring that neurons fire together, a process called “neural synchronization.” Sadly, the hippocampus often deteriorates with age. Proteins and hormones, which repair and protect cells in the brain, typically decline as an individual ages. With the deterioration of the hippocampus, an individual becomes more prone to memory loss. Memory loss often starts off as mild but may evolve into serious medical conditions such as dementia and Alzheimer’s disease. In their quest to fully comprehend how memories work, scientists have created many different kinds of technology to examine the brain and its neural pathways. For instance, Magnetic Resonance Imaging - or MRI - is used to collect detailed images of an individual's brain anatomy. To monitor and analyze brain function, a different version of this machine, called Functional Magnetic Resonance Imaging - or fMRI - is used. The fMRI is a neuroimaging procedure conducted while the target brain regions are active. It measures brain activity by detecting changes in blood flow associated with neural activity: neurons need more oxygen when they are active, and the fMRI measures the difference in magnetization between oxygen-rich and oxygen-poor blood. This way, there is a detectable difference across brain regions, and scientists can monitor them. Electroencephalography - or EEG - is also a significant way to monitor the human brain. 
The EEG is more versatile and cost-efficient than fMRI. An EEG measures the electrical activity generated by the cortical layers of the brain, allowing scientists to record brain processes that occur after external stimuli. EEGs have a very high temporal resolution, which makes it possible to measure synchronized neural activity and track the contents of short-term memory almost precisely. Science has come a long way in monitoring memories with these kinds of devices, and the inspection of neurons and neural pathways has become correspondingly more intensive and detailed.

Keywords: brain, EEG, fMRI, hippocampus, memories, neural pathways, neurons

Procedia PDF Downloads 48
93 Dimensionality Reduction in Modal Analysis for Structural Health Monitoring

Authors: Elia Favarelli, Enrico Testi, Andrea Giorgetti

Abstract:

Autonomous structural health monitoring (SHM) of structures such as bridges has become a topic of paramount importance for maintenance and safety reasons. This paper proposes a set of machine learning (ML) tools to perform automatic feature selection and anomaly detection in a bridge from vibrational data, and compares different feature extraction schemes to increase accuracy and reduce the amount of data collected. As a case study, the Z-24 bridge is considered because of its extensive database of accelerometric data in both standard and damaged conditions. The proposed framework starts from the first four fundamental frequencies extracted through operational modal analysis (OMA) and clustering, followed by density-based time-domain filtering (tracking). The extracted fundamental frequencies are then fed to a dimensionality reduction block implemented through two different approaches: feature selection (an intelligent multiplexer) that tries to estimate the most reliable frequencies based on the evaluation of statistical features (i.e., mean value, variance, kurtosis), and feature extraction (an auto-associative neural network (ANN)) that combines the fundamental frequencies to extract new damage-sensitive features in a low-dimensional feature space. Finally, one-class classifier (OCC) algorithms perform anomaly detection, trained with standard-condition points and tested with both normal and anomalous ones. In particular, a new anomaly detection strategy is proposed, namely one-class classifier neural network two (OCCNN2), which exploits the classification capability of standard classifiers in an anomaly detection problem, finding the standard class (the boundary of the feature space in normal operating conditions) through a two-step approach: coarse and fine boundary estimation. 
The coarse estimation uses classic OCC techniques, while the fine estimation is performed by a feedforward neural network (NN) trained on the boundaries estimated in the coarse step. The detection algorithms are then compared with known methods based on principal component analysis (PCA), kernel principal component analysis (KPCA), and the auto-associative neural network (ANN). In many cases, the proposed solution outperforms the standard OCC algorithms in terms of F1 score and accuracy. In particular, by evaluating the correct features, anomalies can be detected with an accuracy and an F1 score greater than 96% with the proposed method.
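To make the one-class setting concrete, the PCA baseline mentioned above can be sketched in a few lines: train only on normal-condition frequency vectors, keep the leading principal component(s), and flag observations whose reconstruction error exceeds a threshold set on the training data. The frequency values, the environmental co-variation and the damage shift below are all illustrative stand-ins, not the real Z-24 data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the feature vectors: 4 fundamental frequencies
# (Hz) per observation. In normal conditions they co-vary along a single
# "environmental" direction (e.g. temperature); numbers are illustrative.
base = np.array([3.9, 5.0, 9.8, 10.3])
env = rng.normal(0.0, 0.05, size=500)                # environmental factor
normal = base + np.outer(env, [0.5, 0.5, 0.5, 0.5]) \
         + rng.normal(0.0, 0.01, size=(500, 4))
shift = np.array([-0.20, -0.10, 0.00, -0.30])        # assumed damage pattern
damaged = normal[:50] + shift

# PCA fitted on normal-condition data only (one-class setting).
mu = normal.mean(axis=0)
_, _, Vt = np.linalg.svd(normal - mu, full_matrices=False)
W = Vt[:1]                                           # keep 1 principal component

def recon_error(samples):
    Z = (samples - mu) @ W.T                         # project onto the PCs
    return np.linalg.norm(samples - mu - Z @ W, axis=1)

# Boundary of the "standard class": high percentile of training error.
thr = np.percentile(recon_error(normal), 99)
detected = recon_error(damaged) > thr
print(f"detected {detected.mean():.0%} of damaged samples")
```

The OCCNN2 idea described in the abstract replaces this fixed-threshold boundary with a coarse OCC estimate refined by a feedforward network; the one-class training protocol (fit on normal data, test on both classes) is the same.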

Keywords: anomaly detection, frequencies selection, modal analysis, neural network, sensor network, structural health monitoring, vibration measurement

Procedia PDF Downloads 92
92 Analytical Tools for Multi-Residue Analysis of Some Oxygenated Metabolites of PAHs (Hydroxylated, Quinones) in Sediments

Authors: I. Berger, N. Machour, F. Portet-Koltalo

Abstract:

Polycyclic aromatic hydrocarbons (PAHs) are toxic and carcinogenic pollutants produced mainly by incomplete combustion processes in industrialized and urbanized areas. After being emitted into the atmosphere, these persistent contaminants are deposited onto soils or sediments. Even though persistent, some can be partially degraded (photodegradation, biodegradation, chemical oxidation), leading to oxygenated metabolites (oxy-PAHs) which can be more toxic than their parent PAH. Oxy-PAHs are measured less often than PAHs in sediments, and this study aims to compare different analytical tools to extract and quantify a mixture of four hydroxylated PAHs (OH-PAHs) and four carbonyl PAHs (quinones) in sediments. Methodologies: Two analytical systems – HPLC with on-line UV and fluorescence detectors (HPLC-UV-FLD) and GC coupled to a mass spectrometer (GC-MS) – were compared to separate and quantify oxy-PAHs. Microwave assisted extraction (MAE) was optimized to extract oxy-PAHs from sediments. Results: First, OH-PAHs and quinones were analyzed by HPLC with on-line UV and fluorimetric detectors. OH-PAHs were detected with the sensitive FLD, while the non-fluorescent quinones were detected with UV. The limits of detection (LODs) obtained were in the range (2-3)×10⁻⁴ mg/L for OH-PAHs and (2-3)×10⁻³ mg/L for quinones. Second, even though GC-MS is not well suited to the analysis of the thermodegradable OH-PAHs and quinones without a derivatization step, it was used because of the advantages of the detector in terms of identification and of GC in terms of efficiency. Without derivatization, only two of the four quinones were detected in the range 1-10 mg/L (LODs = 0.3-1.2 mg/L), and the LODs for the four OH-PAHs were not satisfactory either (0.18-0.6 mg/L). Two derivatization processes were therefore optimized against the literature: silylation for OH-PAHs and acetylation for quinones. 
Silylation using BSTFA/TMCS 99:1 was enhanced by using a mixture of catalyst solvents (pyridine/ethyl acetate) and by finding the appropriate reaction duration (5-60 minutes). Acetylation was optimized at different steps of the process, including the initial volume of compounds to derivatize, the amount of Zn added (0.1-0.25 g), the nature of the derivatization reagent (acetic anhydride, heptafluorobutyric acid…) and the liquid/liquid extraction at the end of the process. After derivatization, LODs decreased by a factor of 3 for OH-PAHs and by a factor of 4 for quinones, with all the quinones now detected. Thereafter, quinones and OH-PAHs were extracted from spiked sediments using microwave assisted extraction (MAE) followed by GC-MS analysis. Several solvent mixtures of different volumes (10-25 mL) and different extraction temperatures (80-120°C) were tested to obtain the best recovery yields. Satisfactory recoveries were obtained for quinones (70-96%) and for OH-PAHs (70-104%). Temperature was a critical factor that had to be controlled to avoid oxy-PAH degradation during the MAE extraction process. Conclusion: Even though MAE-GC-MS proved satisfactory for analyzing these oxy-PAHs, the MAE optimization must be continued to find a more appropriate extraction solvent mixture, allowing direct injection into the HPLC-UV-FLD system, which is more sensitive than GC-MS and does not require a long prior derivatization step.
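The reported LOD improvements can be tabulated directly from the factors quoted above; a minimal sketch, using only the pre-derivatization ranges and gain factors stated in the abstract:

```python
# Pre-derivatization GC-MS LOD ranges (mg/L) and the improvement factors
# reported in the study; the post-derivatization values follow by division.
lod_before = {"OH-PAHs": (0.18, 0.6), "quinones": (0.3, 1.2)}
gain = {"OH-PAHs": 3, "quinones": 4}

lod_after = {k: (lo / gain[k], hi / gain[k])
             for k, (lo, hi) in lod_before.items()}
for k, (lo, hi) in lod_after.items():
    print(f"{k}: LOD after derivatization {lo:.3f}-{hi:.3f} mg/L")
```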

Keywords: derivatizations for GC-MS, microwave assisted extraction, on-line HPLC-UV-FLD, oxygenated PAHs, polluted sediments

Procedia PDF Downloads 259
91 Exploring Coexisting Opportunity of Earthquake Risk and Urban Growth

Authors: Chang Hsueh-Sheng, Chen Tzu-Ling

Abstract:

Earthquakes are unpredictable natural disasters, and intense earthquakes have seriously impacted socio-economic systems and environmental and social resilience, further increasing vulnerability. Earthquakes do not kill people; buildings do. Buildings located near earthquake-prone areas and constructed on poorer soils are liable to earthquake-induced ground damage. In addition, many existing buildings were built before improved seismic provisions were required in building codes, and inappropriate land use combined with dense population can make an earthquake disaster much more serious. Indeed, not only do earthquake disasters seriously impact the urban environment, but urban growth may also increase vulnerability. Since the 1980s, ‘cutting down risks and vulnerability’ has been advocated in both urban planning and architecture; this concept goes well beyond the retrofitting of seismic damage, seismic resistance, and better anti-seismic structures, and has become the key action in disaster mitigation. Land use planning and zoning are two critical non-structural measures for controlling physical development, yet it is difficult for zoning boards and governing bodies to restrict the development of questionable lands to uses compatible with the hazard without credible earthquake loss projections. Therefore, identifying potential earthquake exposure, vulnerable people and places, and urban development areas can provide strong supporting information for decision makers. Taiwan is located on the Pacific Ring of Fire, a seismically active zone, and some active faults lie close to densely populated and highly developed built environments in its cities. Therefore, this study attempts, from the perspective of carrying capacity, to draft a micro-zonation according to both a vulnerability index and an urban growth index, while considering the spatial variance of multiple factors via geographically weighted principal components analysis (GWPCA). 
The purpose of this study is to provide supporting information for decision makers on revising existing zoning in high-risk areas toward more compatible uses, and for the public on managing risks.
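The core of GWPCA, as opposed to a single global PCA, is that a separate weighted PCA is computed at each location using distance-decaying weights, so the dominant component of the indicators can vary across space. A minimal sketch under assumed synthetic data (the district coordinates, the five indicators and the Gaussian bandwidth below are all illustrative, not the study's inputs):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-ins: 200 district centroids (km) and 5 standardized
# vulnerability/growth indicators per district.
coords = rng.uniform(0, 100, size=(200, 2))
X = rng.normal(size=(200, 5))

def local_pca(target, coords, X, bandwidth=20.0, n_comp=2):
    """Weighted PCA at one location: Gaussian kernel weights on distance."""
    d2 = np.sum((coords - target) ** 2, axis=1)
    w = np.exp(-d2 / (2 * bandwidth ** 2))   # Gaussian kernel weights
    w /= w.sum()
    mu = w @ X                               # weighted mean
    Xc = X - mu
    C = (Xc * w[:, None]).T @ Xc             # weighted covariance (5x5)
    vals, vecs = np.linalg.eigh(C)           # eigenvalues in ascending order
    order = np.argsort(vals)[::-1][:n_comp]  # take the top n_comp
    return vals[order], vecs[:, order]       # local variances and loadings

vals, vecs = local_pca(coords[0], coords, X)
print("leading local eigenvalues:", vals)
```

Mapping the leading local loading vector over all locations is what reveals spatially varying structure; the bandwidth choice controls how local the analysis is.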

Keywords: earthquake disaster, vulnerability, urban growth, carrying capacity, geographically weighted principal components analysis (GWPCA), bivariate spatial association statistic

Procedia PDF Downloads 227
90 Improving the Dielectric Strength of Transformer Oil for High Health Index: An FEM Based Approach Using Nanofluids

Authors: Fatima Khurshid, Noor Ul Ain, Syed Abdul Rehman Kashif, Zainab Riaz, Abdullah Usman Khan, Muhammad Imran

Abstract:

As the world moves towards extra-high voltage (EHV) and ultra-high voltage (UHV) power systems, the performance requirements of power transformers are becoming crucial to system reliability and security. With transformers being an essential component of a power system, a low transformer health index poses greater risks to safe and reliable operation. Therefore, to meet the rising demands on power system and transformer performance, researchers are being prompted to provide solutions for enhanced thermal and electrical properties of transformers. This paper proposes an approach to improve the health index of a transformer by using nanotechnology in conjunction with biodegradable oils. Vegetable oils can serve as potential dielectric fluid alternatives to conventional mineral oils, owing to their inherent benefits, namely higher fire and flash points and their environment-friendly nature. Moreover, the addition of nanoparticles to the dielectric fluid further improves the dielectric strength of the insulation medium. In this research, using the finite element method (FEM) in the COMSOL Multiphysics environment with a 2D space dimension, three different oil samples were modelled, and the electric field distribution was computed for each sample at various electric potentials, i.e., 90 kV, 100 kV, 150 kV, and 200 kV. Furthermore, each sample was modified by the addition of nanoparticles of different radii (50 nm and 100 nm) and at different interparticle distances (5 mm and 10 mm), considering an instant of time. The nanoparticles used are non-conductive and were modelled as alumina (Al₂O₃). The geometry was modelled according to IEC standard 60897, with a standard electrode gap of 25 mm. 
For an input supply voltage of 100 kV, the maximum electric field stresses obtained for the samples of synthetic vegetable oil, olive oil, and mineral oil are 5.08×10⁶ V/m, 5.11×10⁶ V/m and 5.62×10⁶ V/m, respectively. It is observed that, among the unmodified samples, the vegetable oils have a greater dielectric strength than the conventionally used mineral oil because of their higher flash points and higher relative permittivity. Also, for the modified samples, the addition of nanoparticles inhibits streamer propagation inside the dielectric medium and hence improves the dielectric properties of the medium.
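A quick plausibility check on these maxima (back-of-the-envelope arithmetic, not the FEM model itself) is to compare them with the mean field of a uniform gap, V/d, for the 25 mm IEC 60897 electrode spacing; the ratio gives the field enhancement introduced by the electrode geometry:

```python
# Uniform-field estimate for the IEC 60897 gap vs. the peak stresses
# reported in the study (illustrative check, not the FEM computation).
V = 100e3            # applied voltage (V)
gap = 25e-3          # electrode gap (m)
E_mean = V / gap     # 4.0e6 V/m for an ideal uniform field

peaks = {            # maximum FEM field stresses from the study (V/m)
    "synthetic vegetable oil": 5.08e6,
    "olive oil": 5.11e6,
    "mineral oil": 5.62e6,
}
for oil, E_max in peaks.items():
    print(f"{oil}: enhancement factor {E_max / E_mean:.2f}")
```

The reported peaks sit 27-41% above the uniform-field value, consistent with the non-uniform field of a standard electrode arrangement.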

Keywords: dielectric strength, finite element method, health index, nanotechnology, streamer propagation

Procedia PDF Downloads 118
89 The Impact of HKUST-1 Metal-Organic Framework Pretreatment on Dynamic Acetaldehyde Adsorption

Authors: M. François, L. Sigot, C. Vallières

Abstract:

Volatile Organic Compounds (VOCs) are a real health issue, particularly in domestic indoor environments. Among these VOCs, acetaldehyde is frequently monitored in dwellings' air, notably due to smoking and spontaneous emissions from new wall and floor coverings. It is responsible for respiratory complaints and is classified as possibly carcinogenic to humans. Adsorption processes are commonly used to remove VOCs from air. Metal-Organic Frameworks (MOFs) are a promising type of material for high adsorption performance. These hybrid porous materials, composed of inorganic metal clusters and organic ligands, are interesting thanks to their high porosity and surface area. HKUST-1 (also referred to as MOF-199) is a copper-based MOF with the formula [Cu₃(BTC)₂(H₂O)₃]n (BTC = benzene-1,3,5-tricarboxylate) and exhibits unsaturated metal sites that can act as attractive adsorption sites. The objective of this study is to investigate the impact of HKUST-1 pretreatment on acetaldehyde adsorption. Dynamic adsorption experiments were therefore conducted in a 1 cm diameter glass column packed with a 2 cm MOF bed. The MOF was sieved to 630 µm - 1 mm. The feed gas (Co = 460 ppmv ± 5 ppmv) was obtained by diluting a 1000 ppmv acetaldehyde gas cylinder in air. The gas flow rate was set to 0.7 L/min (to guarantee a suitable linear velocity). The acetaldehyde concentration was monitored online by gas chromatography coupled with a flame ionization detector (GC-FID). The breakthrough curves should make it possible to understand the interactions between the MOF and the pollutant, as well as the impact of the HKUST-1 water content on the adsorption process. Consequently, different MOF water content conditions were tested, from a dried material with 7% water content (dark blue color) to a water-saturated state with approximately 35% water content (turquoise color). The rough material – without any pretreatment – containing 30% water serves as a reference. 
First, conclusions can be drawn by comparing the evolution of the ratio of the column outlet concentration (C) to the inlet concentration (Co) as a function of time for the different HKUST-1 pretreatments. The shapes of the breakthrough curves are significantly different. The saturation of the rough material is slower (20 h to reach saturation) than that of the dried material (2 h). However, the breakthrough time, defined at C/Co = 10%, occurs earlier for the rough material (0.75 h) than for the dried HKUST-1 (1.4 h). Another notable difference is the shape of the curve before the 10% breakthrough: an abrupt increase in the outlet concentration is observed for the material with the lower water content, in contrast to a smooth increase for the rough material. Thus, the water content plays a significant role in the breakthrough kinetics. This study aims to understand what explains the shape of the breakthrough curves associated with the HKUST-1 pretreatments, and which mechanisms take place in the adsorption process between the MOF, the pollutant, and water.
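The two figures of merit used above can be extracted from any measured curve in a few lines; a minimal sketch of the usual post-processing (assumed workflow, not the authors' exact procedure), with an illustrative S-shaped C/Co curve standing in for the GC-FID data:

```python
import numpy as np

# Illustrative breakthrough curve: logistic C/Co vs. time (h). The real
# data would come from the GC-FID measurements described in the study.
t = np.linspace(0, 20, 400)
c_ratio = 1 / (1 + np.exp(-(t - 5) / 1.2))

# Breakthrough time: first time C/Co exceeds 10 % (the study's criterion).
t_b = t[np.argmax(c_ratio >= 0.10)]

# Adsorbed quantity up to saturation: Q * Co * integral of (1 - C/Co) dt,
# here with a simple rectangle rule on the uniform time grid.
Q = 0.7 * 60                       # gas flow rate, L/h (0.7 L/min)
Co = 460e-6                        # inlet mole fraction (460 ppmv)
dt = t[1] - t[0]
adsorbed = Q * Co * np.sum(1 - c_ratio) * dt   # liters of pollutant fed minus eluted
print(f"breakthrough time: {t_b:.2f} h")
```

With real data, comparing `t_b` and the saturation time across the pretreated samples quantifies exactly the contrast described above (earlier breakthrough but slower saturation for the rough material).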

Keywords: acetaldehyde, dynamic adsorption, HKUST-1, pretreatment influence

Procedia PDF Downloads 208
88 The Impact of Land Use Ex-Concession to the Environment in Dharmasraya District, West Sumatra Province, Indonesia

Authors: Yurike, Yonariza, Rudi Febriamansyah, Syafruddin Karimi

Abstract:

Forest is a natural resource with an important function as a supporting element of human life. The enormous impact of forest degradation on global warming is a reality we have all experienced: disrupted ecosystems, extreme weather conditions, disturbed water management in watersheds, the threat of natural disasters such as floods, landslides and droughts, and even disrupted food security. Dharmasraya is a district in the province of West Sumatra with 92,150 ha of forest, largely former production forest concessions (forest management rights) that are supposed to have become secondary forest. This study examines the environmental impact of land use in the former concession areas of Dharmasraya. The methodology combines a household survey, key informants, and satellite/GIS data. The study found that the former concession area in Dharmasraya has experienced a significant reduction of forest cover over time. The former concession forests, which should be secondary forests, have been converted into oil palm plantations. Population pressure and growing economic pressure have resulted in more intensive harvesting, and these forest disturbances have changed the functions of the forest. The changes emphasize the economic function while ignoring the social and ecological functions: society prefers to maximize its present benefits and pays less attention to the protection of natural resources. This intensifies global warming, which is felt not only by the people around Dharmasraya but also by the world. Land clearing by the community proceeds through slash and burn; these fires were observed by NOAA satellites and recorded by the Forest Service of West Sumatra. Felling the trees removes their capacity to absorb carbon dioxide (CO₂), and the forest fires themselves emit carbon dioxide into the air, both of which contribute to global warming. 
In addition, with the conversion of land to oil palm plantations, water services have deteriorated: people have begun to struggle for water, and plantations located in the watershed have caused rivers to dry up. The findings of this study are expected to contribute ideas to policy makers, encouraging more attention to the management of former concession forests as a means of preventing or reducing global warming.

Keywords: climate change, community, concession forests, environment

Procedia PDF Downloads 299