Search results for: noise hazard
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1755

285 Estimating Affected Croplands and Potential Crop Yield Loss of an Individual Farmer Due to Floods

Authors: Shima Nabinejad, Holger Schüttrumpf

Abstract:

Farmers living in flood-prone areas such as coasts are exposed to storm surges that are intensifying due to climate change. Crop cultivation is the most important economic activity of these farmers, and during floods, agricultural land is subject to inundation. Moreover, overflowing saline water causes more severe damage than riverine flooding: agricultural crops are more vulnerable to salinity than other land uses, so the economic damages may persist for several years after a flood and affect farmers’ decision-making in the following year. It is therefore essential to assess to what extent the agricultural areas are flooded and how large the associated flood damage to each individual farmer is. To address these questions, we integrated farmers’ decision-making at farm scale with flood risk management. The integrated model includes identification of hazard scenarios, failure analysis of structural measures, derivation of hydraulic parameters for the inundated areas, and analysis of the economic damages experienced by each farmer. The present study has two aims: firstly, it investigates the flooded cropland and potential crop damages for the whole area; secondly, it compares them among farmers’ fields for three flood scenarios, which differ in the breach locations of the flood protection structure. To achieve this goal, the spatial distribution of farmers’ fields and cultivated crops was fed into the flood risk model, and a 100-year storm surge hydrograph was selected as the flood event. The study area was Pellworm Island, which is located in the German Wadden Sea National Park and surrounded by the North Sea. Due to the high salt content of North Sea seawater, crops cultivated in the agricultural areas of Pellworm Island are destroyed completely by storm surges, which was taken into account in developing the depth-damage curve for the analysis of consequences. As a result, inundated cropland and economic damages to crops were estimated for the whole island and further compared for six selected farmers under three flood scenarios. The results demonstrate the significance and flexibility of the proposed model in the flood risk assessment of flood-prone areas by integrating flood risk management and decision-making.

Keywords: crop damages, flood risk analysis, individual farmer, inundated cropland, Pellworm Island, storm surges

Procedia PDF Downloads 255
284 Detection of Acrylamide Using Liquid Chromatography-Tandem Mass Spectrometry and Quantitative Risk Assessment in Selected Food from Saudi Market

Authors: Sarah A. Alotaibi, Mohammed A. Almutairi, Abdullah A. Alsayari, Adibah M. Almutairi, Somaiah K. Almubayedh

Abstract:

Concerns over the presence of acrylamide in food date back to 2002, when Swedish scientists reported that substantial amounts of acrylamide are formed in carbohydrate-rich foods cooked at high temperatures. Similar findings were reported by other researchers, which consequently prompted major international efforts to investigate dietary exposure and the subsequent health complications in order to manage the issue properly. In this work, we therefore aim to determine the acrylamide level in different foods (coffee, potato chips, biscuits, and baby food) commonly consumed by the Saudi population. In a total of forty-three samples, acrylamide was detected in twenty-three samples at levels of 12.3 to 2850 µg/kg. Among the food groups, the highest concentration of acrylamide was found in coffee samples (<12.3-2850 μg/kg), followed by potato chips (655-1310 μg/kg) and then biscuits (23.5-449 μg/kg), whereas the lowest acrylamide level was observed in baby food (<14.75-126 μg/kg). Most coffee, biscuit, and potato chip products contain a high amount of acrylamide and are also among the most commonly consumed products. Saudi adults had mean acrylamide exposures for coffee, potato chips, biscuits, and cereal of 0.07439, 0.04794, 0.01125, and 0.003371 µg/kg-b.w./day, respectively. By contrast, the exposures of Saudi infants and children to the same types of food were 0.1701, 0.1096, 0.02572, and 0.00771 µg/kg-b.w./day, respectively. Most groups have a percentile that exceeds the tolerable daily intake (TDI) cancer value (2.6 µg/kg-b.w./day). Overall, the margin-of-exposure (MOE) results show that the Saudi population is at high risk of acrylamide-related disease for all food types, and there is a chance of cancer risk in all age groups (all values <10,000). Furthermore, for non-cancer risks, the acrylamide in all tested foods was within the safe limit (>125), except for potato chips, for which there is a risk of disease in the population. With potato and coffee as raw materials, additional studies were conducted to assess the factors affecting acrylamide formation in fried potato and roasted coffee, including temperature, cooking time, and additives. By systematically varying processing temperature and time, a mitigation of acrylamide content was achieved by lowering the temperature and shortening the cooking time. It was also shown that the combined addition of chitosan and NaCl had a large impact on acrylamide formation.

Keywords: risk assessment, dietary exposure, MOA, acrylamide, hazard

Procedia PDF Downloads 54
283 Similar Script Character Recognition on Kannada and Telugu

Authors: Gurukiran Veerapur, Nytik Birudavolu, Seetharam U. N., Chandravva Hebbi, R. Praneeth Reddy

Abstract:

This work presents a robust approach for the recognition of characters in Telugu and Kannada, two South Indian scripts with structural similarities between characters. Exhaustive datasets are required to recognize the characters, but only a few are publicly available. As a result, we decided to create a dataset for one language (the source language), train the model on it, and then test it on the target language. Telugu is the target language in this work, whereas Kannada is the source language. The suggested method makes use of Canny edge features to increase character identification accuracy on images with noise and varying lighting. A dataset of 45,150 images containing printed Kannada characters was created. The Nudi software was used to automatically generate printed Kannada characters with different writing styles and variations, and manual labelling was employed to ensure the accuracy of the character labels. Deep learning models, namely a convolutional neural network (CNN) and a visual attention network (VAN), were used to experiment with the dataset. A VAN architecture incorporating additional channels for the Canny edge features was adopted, as the results obtained with this approach were the best. The model's accuracy on the combined Telugu and Kannada test dataset was an outstanding 97.3%. Performance was better with the Canny edge features applied than with a model that used only the original grayscale images. When tested on each language separately, the model's accuracy was 80.11% for Telugu characters and 98.01% for Kannada words. This model, which makes use of state-of-the-art machine learning techniques, shows excellent accuracy when identifying and categorizing characters from these scripts.
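
As a rough illustration of the edge-channel idea described above, the sketch below stacks a Canny edge map with the grayscale character image as a two-channel network input; the file name, image size, and thresholds are illustrative assumptions, not the paper's settings.

```python
# Sketch: grayscale character image + Canny edge map as a 2-channel CNN/VAN input.
import cv2
import numpy as np

def to_two_channel(gray_img: np.ndarray) -> np.ndarray:
    """Stack the grayscale image and its Canny edge map into a 2-channel array."""
    edges = cv2.Canny(gray_img, threshold1=100, threshold2=200)  # binary edge map
    stacked = np.stack([gray_img, edges], axis=-1).astype(np.float32) / 255.0
    return stacked  # shape: (H, W, 2), ready for a CNN/VAN input layer

img = cv2.imread("kannada_char.png", cv2.IMREAD_GRAYSCALE)  # hypothetical path
if img is None:  # fall back to a synthetic glyph so the sketch still runs
    img = np.full((64, 64), 255, np.uint8)
    cv2.putText(img, "K", (10, 50), cv2.FONT_HERSHEY_SIMPLEX, 2, 0, 3)
img = cv2.resize(img, (64, 64))
x = to_two_channel(img)
print(x.shape)  # (64, 64, 2)
```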

Keywords: base characters, modifiers, guninthalu, aksharas, vattakshara, VAN

Procedia PDF Downloads 49
282 Different Data-Driven Bivariate Statistical Approaches to Landslide Susceptibility Mapping (Uzundere, Erzurum, Turkey)

Authors: Azimollah Aleshzadeh, Enver Vural Yavuz

Abstract:

The main goal of this study is to produce landslide susceptibility maps using different data-driven bivariate statistical approaches, namely the entropy weight method (EWM), evidence belief function (EBF), and information content model (ICM), for Uzundere county, Erzurum province, in the north-eastern part of Turkey. Past landslide occurrences were identified and mapped from an interpretation of high-resolution satellite images and earlier reports, as well as by carrying out field surveys. In total, 42 landslide incidence polygons were mapped using ArcGIS 10.4.1 software and randomly split into a construction dataset of 70% (30 landslide incidences) for building the EWM, EBF, and ICM models, while the remaining 30% (12 landslide incidences) were used for verification purposes. Twelve layers of landslide-predisposing parameters were prepared, including total surface radiation, maximum relief, soil groups, standard curvature, distance to stream/river sites, distance to the road network, surface roughness, land use pattern, engineering geological rock group, topographical elevation, orientation of slope, and terrain slope gradient. The relationships between the landslide-predisposing parameters and the landslide inventory map were determined using the different statistical models (EWM, EBF, and ICM). The model results were validated with the landslide incidences that were not used during model construction. In addition, receiver operating characteristic curves were applied, and the area under the curve (AUC) was determined for the different susceptibility maps using the success (construction data) and prediction (verification data) rate curves. The results revealed that the AUCs for the success rates are 0.7055, 0.7221, and 0.7368, while the prediction rates are 0.6811, 0.6997, and 0.7105 for the EWM, EBF, and ICM models, respectively. Consequently, the landslide susceptibility maps were classified into five susceptibility classes: very low, low, moderate, high, and very high. Additionally, the portion of construction and verification landslide incidences falling in the high and very high landslide susceptibility classes of each map was determined. The results showed that the EWM, EBF, and ICM models produced satisfactory accuracy. The obtained landslide susceptibility maps may be useful for future natural hazard mitigation studies and planning purposes for environmental protection.
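
All three bivariate models boil down to assigning each class of each predisposing layer a weight derived from how landslide density in that class compares with the overall density. A minimal information-content-style sketch, assuming flattened per-pixel rasters (arrays and values invented):

```python
import numpy as np

def information_content_weights(param_class, landslide):
    """Per-class weight = ln(class landslide density / overall landslide density)."""
    overall_density = landslide.mean()
    weights = {}
    for c in np.unique(param_class):
        mask = param_class == c
        class_density = landslide[mask].mean()
        # Classes with no landslides get a strongly negative weight; guard log(0).
        weights[c] = np.log(class_density / overall_density) if class_density > 0 else -np.inf
    return weights

# Example: 3 slope classes over a toy 10-pixel raster
param_class = np.array([0, 0, 1, 1, 1, 2, 2, 2, 2, 2])
landslide   = np.array([0, 0, 1, 0, 1, 1, 0, 0, 0, 0])
print(information_content_weights(param_class, landslide))
```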

Keywords: entropy weight method, evidence belief function, information content model, landslide susceptibility mapping

Procedia PDF Downloads 129
281 A Deep Learning Model with Greedy Layer-Wise Pretraining Approach for Optimal Syngas Production by Dry Reforming of Methane

Authors: Maryam Zarabian, Hector Guzman, Pedro Pereira-Almao, Abraham Fapojuwo

Abstract:

Dry reforming of methane (DRM) has sparked significant industrial and scientific interest, not only as a viable alternative for addressing the environmental concerns of two main contributors to the greenhouse effect, i.e., carbon dioxide (CO₂) and methane (CH₄), but also because it produces syngas, i.e., a mixture of hydrogen (H₂) and carbon monoxide (CO) utilized by a wide range of downstream processes as a feedstock for other chemical productions. In this study, we develop an AI-enabled syngas production model to tackle the problem of achieving an equivalent H₂/CO ratio [1:1] with respect to the most efficient conversion. Firstly, the unsupervised density-based spatial clustering of applications with noise (DBSCAN) algorithm removes outlier data points from the original experimental dataset. Then, random forest (RF) and deep neural network (DNN) models employ the error-free dataset to predict the DRM results. DNN models inherently cannot obtain accurate predictions without a huge dataset. To cope with this limitation, we employ approaches that reuse pre-trained layers, such as transfer learning and greedy layer-wise pretraining. Compared to the other deep models (i.e., the pure deep model and the transferred deep model), the greedy layer-wise pre-trained deep model provides the most accurate prediction as well as accuracy similar to the RF model, with R² values of 1.00, 0.999, 0.999, 0.999, 0.999, and 0.999 for the total outlet flow, H₂/CO ratio, H₂ yield, CO yield, CH₄ conversion, and CO₂ conversion outputs, respectively.
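
A minimal sketch of the DBSCAN outlier-removal step described above; the feature matrix and the eps/min_samples settings are invented stand-ins for the experimental DRM data:

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import StandardScaler

X = np.random.rand(200, 4)  # stand-in for the experimental DRM dataset
X_scaled = StandardScaler().fit_transform(X)

labels = DBSCAN(eps=0.8, min_samples=5).fit_predict(X_scaled)
X_clean = X[labels != -1]  # DBSCAN marks noise points with label -1
print(f"removed {np.sum(labels == -1)} outliers, kept {len(X_clean)} rows")
```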

Keywords: artificial intelligence, dry reforming of methane, artificial neural network, deep learning, machine learning, transfer learning, greedy layer-wise pretraining

Procedia PDF Downloads 82
280 Flood Hazards and Emergency Response in Negara Brunei Darussalam

Authors: Hj Mohd Sidek bin Hj Mohd Yusof

Abstract:

More than 1.5 billion people around the world are adversely affected by floods. Floods account for about a third of all natural catastrophes, cause more than half of all fatalities, and are responsible for a third of the overall economic loss around the world. Giving advance warning of impending disasters can reduce or even avoid the deaths and the social and economic hardships that are so commonly reported after the event. Integrated catchment management recognizes that it is not practical or viable to provide structural measures that will keep floodwater away from the community and their property. Non-structural measures are therefore required to assist the community to cope when flooding occurs that exceeds the capacity of the structural measures. Non-structural measures may be used to influence the way land is used or buildings are constructed, or to improve the community’s preparedness and response to flooding. The development and implementation of non-structural measures may be guided and encouraged by policy and legislation, or through voluntary action by the community based on knowledge gained from public education programs. There is a range of non-structural measures that can be used for flood hazard mitigation. Land-use measures include policies and rules applied by the government to regulate the kinds of activities carried out in various flood-prone areas, including minimum floor levels and the types of development approved. Voluntary actions can be taken by the authorities and by the community living and working on the flood plain to lessen the effects of flooding on themselves and their properties, including monitoring land use changes, monitoring and investigating the effects of bush/forest clearing in the catchment, and providing relevant flood-related information to the community. Response modification measures may include: a flood warning system, flood education, community awareness and readiness, evacuation arrangements, and a recovery plan. A Civil Defense Emergency Management body needs to be established for Brunei Darussalam in order to plan, co-ordinate, and undertake flood emergency management. This responsibility may be taken by the Ministry of Home Affairs, Brunei Darussalam, which is already responsible for fire fighting and rescue services. Several pieces of legislation and planning instruments are in place to assist flood management, particularly: flood warning systems, flood education, community awareness and readiness, evacuation arrangements, and recovery plans.

Keywords: RTB (Radio Television Brunei), DDMC (District Disaster Management Center), FIR (flood incidence report), PWD (Public Works Department)

Procedia PDF Downloads 254
279 Participatory Cartography for Disaster Reduction in Progreso, Yucatan, Mexico

Authors: Gustavo Cruz-Bello

Abstract:

Progreso is a coastal community in Yucatan, Mexico, highly exposed to floods produced by severe storms and tropical cyclones. A participatory cartography approach was conducted to help reduce flood disasters and assess social vulnerability within the community. The first step was to engage local risk management authorities in order to facilitate the process. Two workshops were conducted. In the first, a poster-size print of a high-spatial-resolution satellite image of the town was used to gather information from the participants: eight women and seven men, among them construction workers, students, government employees, and fishermen, whose ages ranged between 23 and 58 years. For the first task, participants were asked to locate emblematic places and mark them on the image to familiarize themselves with it. Then, they were asked to locate areas that get flooded and the buildings they use as refuges, and to list actions they usually take to reduce vulnerability, as well as to collectively come up with others that might reduce disasters. The spatial information generated at the workshops was digitized and integrated into a GIS environment. A printed version of the map was reviewed by local risk management experts, who validated the feasibility of the proposed actions. For the second workshop, we brought the information back to the community for feedback. Additionally, a survey was applied to one household per block in the community to obtain socioeconomic, prevention, and adaptation data. The information generated from the workshops was contrasted, through t and chi-squared tests, with the survey data in order to test the hypothesis that poorer or less educated people are less prepared to face floods (more vulnerable) and live near or among higher presence of floods. Results showed that a great majority of people in the community are aware of the hazard and are prepared to face it. However, there was not a consistent relationship between regularly flooded areas and people’s average years of education, house services, or house modifications against heavy rains. We can say that the participatory cartography intervention made participants aware of their vulnerability and made them collectively reflect on actions that can reduce disasters produced by floods. They also considered that the final map could be used as a communication and negotiation instrument with NGOs and government authorities. It was not found that poorer and less educated people are located in areas with a higher presence of floods.
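
A minimal sketch of the kind of contrast reported above, using a chi-squared test of independence between flood exposure and education level; the contingency table is invented for illustration:

```python
from scipy.stats import chi2_contingency

# Rows: lives in regularly flooded area (no / yes); columns: education (low / high).
table = [[34, 21],
         [29, 18]]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")  # large p => no consistent relationship
```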

Keywords: climate change, floods, Mexico, participatory mapping, social vulnerability

Procedia PDF Downloads 112
278 Recurrent Neural Networks for Complex Survival Models

Authors: Pius Marthin, Nihal Ata Tutkun

Abstract:

Survival analysis has become one of the paramount procedures in the modeling of time-to-event data. When we encounter complex survival problems, the traditional approach remains limited in accounting for the complex correlational structure between the covariates and the outcome, due to strong assumptions that limit the inference and prediction ability of the resulting models. Several studies exist on the deep learning approach to survival modeling; however, their application to complex survival problems still needs improvement. In addition, the existing models do not fully address the complexity of the data structure and are subject to noise and redundant information. In this study, we design a deep learning technique (CmpXRnnSurv_AE) that overcomes the limitations imposed by traditional approaches and addresses the above issues to jointly predict the risk-specific probabilities and survival function for recurrent events with competing risks. We introduce a component termed Risks Information Weights (RIW) as an attention mechanism to compute the weighted cumulative incidence function (WCIF), and an external autoencoder (ExternalAE) as a feature selector to extract complex characteristics among the set of covariates responsible for the cause-specific events. We train our model using synthetic and real datasets and employ the appropriate metrics for complex survival models for evaluation. As benchmarks, we selected both traditional and machine learning models, and our model demonstrates better performance across all datasets.

Keywords: cumulative incidence function (CIF), risk information weight (RIW), autoencoders (AE), survival analysis, recurrent events with competing risks, recurrent neural networks (RNN), long short-term memory (LSTM), self-attention, multilayer perceptrons (MLPs)

Procedia PDF Downloads 85
277 Non-Destructive Static Damage Detection of Structures Using Genetic Algorithm

Authors: Amir Abbas Fatemi, Zahra Tabrizian, Kabir Sadeghi

Abstract:

To find the location and severity of damage that occurs in a structure, changes in its dynamic and static characteristics can be used. Non-destructive techniques are more common, economic, and reliable for detecting global or local damage in structures. This paper presents a non-destructive method for structural damage detection and assessment using a genetic algorithm (GA) and static data. A set of static forces is applied to some degrees of freedom (DOFs), and the static responses (displacements) are measured at another set of DOFs. An analytical model of the truss structure is developed based on the available specification and the properties derived from the static data. Damage in a structure produces changes in its stiffness, so the method determines damage based on changes in the structural stiffness parameters. Changes in the static response caused by structural damage are used to produce a set of simultaneous equations. Genetic algorithms are powerful tools for solving large optimization problems. The optimization is set up to minimize an objective function involving the difference between the static load vectors of the damaged and healthy structures. Several scenarios are defined for damage detection (a single-damage scenario and multiple-damage scenarios). Static damage identification methods have many advantages, but some difficulties still exist, so it is important to achieve the best damage identification; obtaining the best result indicates that the method is reliable. The strategy is applied to a plane truss. Numerical results demonstrate the ability of this method to detect damage in the given structures, and the figures show that damage detection in multiple-damage scenarios also gives accurate answers. Even the presence of noise in the measurements does not reduce the accuracy of the damage detection method for these structures.
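
A toy sketch of the GA loop described above, searching for element stiffness-reduction factors that minimize the difference between "measured" and model-predicted static displacements. The linear surrogate model and all GA settings are illustrative assumptions, not the paper's finite element setup:

```python
import numpy as np

rng = np.random.default_rng(0)
N_ELEMS, POP, GENS = 5, 40, 200

A = np.array([[1., 2., 3., 4., 5.],
              [2., 1., 2., 1., 2.],
              [1., 1., 2., 2., 3.]])  # toy influence matrix standing in for a FE solve

def displacements(damage):
    """Placeholder linear surrogate: u = A @ (1 - damage)."""
    return A @ (1.0 - damage)

u_measured = displacements(np.array([0.0, 0.3, 0.0, 0.0, 0.1]))  # "true" damage

def fitness(damage):
    return -np.abs(displacements(damage) - u_measured).sum()

pop = rng.uniform(0.0, 0.5, size=(POP, N_ELEMS))
for _ in range(GENS):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)][-POP // 2:]        # keep the best half
    children = (parents[::2] + parents[1::2]) / 2.0      # arithmetic crossover
    children += rng.normal(0.0, 0.02, children.shape)    # Gaussian mutation
    fresh = rng.uniform(0.0, 0.5, (POP - len(parents) - len(children), N_ELEMS))
    pop = np.vstack([parents, np.clip(children, 0.0, 0.5), fresh])

best = max(pop, key=fitness)
print("estimated damage factors:", best.round(2))
```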

Keywords: damage detection, finite element method, static data, non-destructive, genetic algorithm

Procedia PDF Downloads 230
276 Geospatial Curve Fitting Methods for Disease Mapping of Tuberculosis in Eastern Cape Province, South Africa

Authors: Davies Obaromi, Qin Yongsong, James Ndege

Abstract:

To interpolate scattered or regularly distributed data, there are approximate or exact methods; some of these methods are suited to interpolating data on a regular grid and others on an irregular grid. In spatial epidemiology, it is important to examine how disease prevalence rates are distributed in space and how they relate to each other within a defined distance and direction. In this study, for the geographic and graphic representation of disease prevalence, linear and biharmonic spline methods were implemented in MATLAB and used to identify, localize, and compare smoothing in the distribution patterns of tuberculosis (TB) in Eastern Cape Province. The aim of this study is to produce a smoother graphical disease map for TB prevalence patterns by 3-D curve-fitting techniques, especially the biharmonic splines, which can suppress noise easily by seeking a least-squares fit rather than exact interpolation. The datasets are generally represented as 3D (XYZ) triplets, where X and Y are the spatial coordinates and Z is the variable of interest, in this case TB counts in the province. The smoothing spline is a method of fitting a smooth curve to a set of noisy observations using a spline function, and it has become the conventional method because of its high precision, simplicity, and flexibility. Surface and contour plots are produced for TB prevalence at the provincial level for 2012-2015. From the results, the general outlook of all the fittings showed a systematic pattern in the distribution of TB cases in the province, which is consistent with some spatial statistical analyses carried out in the province. This method is rarely used in disease mapping applications, but it has the distinct advantage that it can be assessed at arbitrary locations rather than only on a rectangular grid, as in most traditional GIS methods of geospatial analysis.
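
In Python, a close analogue of MATLAB's biharmonic ('v4') gridding is radial-basis-function interpolation with a thin-plate-spline kernel, where a positive smoothing parameter gives the least-squares, noise-suppressing fit described above. A sketch with invented coordinates and counts:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(1)
xy = rng.uniform(0, 100, size=(50, 2))          # site coordinates (X, Y)
z = 30 + 0.2 * xy[:, 0] + rng.normal(0, 5, 50)  # noisy counts (Z)

# smoothing > 0 yields a least-squares fit rather than exact interpolation,
# which is what suppresses noise in the fitted surface.
spline = RBFInterpolator(xy, z, kernel="thin_plate_spline", smoothing=10.0)

gx, gy = np.meshgrid(np.linspace(0, 100, 25), np.linspace(0, 100, 25))
grid = np.column_stack([gx.ravel(), gy.ravel()])
surface = spline(grid).reshape(gx.shape)        # values for surface/contour plots
```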

Keywords: linear, biharmonic splines, tuberculosis, South Africa

Procedia PDF Downloads 237
275 Evaluation of Mechanical Behavior of Laser Cladding in Various Tilting Pad Bearing Materials

Authors: Si-Geun Choi, Hoon-Jae Park, Jung-Woo Cho, Jin-Ho Lim, Jin-Young Park, Joo-Young Oh, Jae-Il Jeong, Seock-Sam Kim, Young Tae Cho, Chan Gyu Kim, Jong-Hyoung Kim

Abstract:

The tilting pad bearing is a kind of fluid film bearing, and it can deliver high-speed and high-load performance compared to other bearings, including rolling element bearings. Furthermore, the tilting pad bearing has many advantages, such as high stability at high speed, long life, high damping, high impact resistance, and low noise. It is therefore mostly used in mid- to large-size turbomachines, despite its high price. Recently, with laser-based manufacturing and processing advancing at a fast rate in the mechanical industry, dissimilar-metal welding employing laser techniques has been actively studied. Industry is also trying to weld the white metal to the backing metal using the laser cladding method for high durability. Preceding research has shown that the laser cladding method gives much better bond strength, toughness, abrasion resistance, and environmental friendliness than the centrifugal casting method. The laser cladding method therefore offers better quality, cost reduction, eco-friendliness, and technological permanence than the centrifugal or gravity casting methods. In this study, we compare the mechanical properties of different bearing materials by evaluating the behavior of the laser cladding layer on various substrates (i.e., SS400, SCM440, S20C) under the same parameters. Furthermore, we analyze the porosity of the various tilting pad bearing materials in samples clad with white metal. SEM and EDS analyses and hardness tests of the three materials are presented to understand their mechanical properties and tribological behavior. W/D ratio and surface roughness results for the various materials are also reported in this study.

Keywords: laser cladding, tilting pad bearing, white metal, mechanical properties

Procedia PDF Downloads 377
274 Analysis of Real Time Seismic Signal Dataset Using Machine Learning

Authors: Sujata Kulkarni, Udhav Bhosle, Vijaykumar T.

Abstract:

Due to the closeness between seismic signals and non-seismic signals, it is difficult to detect earthquakes using conventional methods. In order to distinguish between seismic events and non-seismic events depending on their amplitude, our study processes the data that come from seismic sensors. The authors suggest a robust noise suppression technique that makes use of a bandpass filter, an IIR Wiener filter, recursive short-term average/long-term average (STA/LTA), and Carl short-term average/long-term average (Carl STA/LTA) for event identification. The trigger ratio used in the proposed study to differentiate between seismic and non-seismic activity is determined. The proposed work focuses on significant feature extraction for machine learning-based seismic event detection. This serves as motivation for compiling a dataset of all features for the identification and forecasting of seismic signals. We place a focus on feature vector dimension reduction techniques due to the temporal complexity. The proposed notable features were experimentally tested using a machine learning model, and the results on unseen data are optimal. Finally, a presentation using a hybrid dataset (captured by different sensors) demonstrates how this model may also be employed in a real-time setting while lowering false alarm rates. The planned study is based on the examination of seismic signals obtained from both individual sensors and sensor networks (SN). A wideband seismic signal from the BSVK and CUKG station sensors, respectively located near Basavakalyan, Karnataka, and the Central University of Karnataka, makes up the experimental dataset.
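
A bare-bones sketch of the STA/LTA trigger ratio at the heart of the event-identification step; the window lengths and the threshold are illustrative assumptions:

```python
import numpy as np

def sta_lta(trace, nsta, nlta):
    """Ratio of short-term to long-term average of signal energy."""
    energy = trace.astype(float) ** 2
    csum = np.cumsum(energy)
    sta = (csum[nsta:] - csum[:-nsta]) / nsta
    lta = (csum[nlta:] - csum[:-nlta]) / nlta
    n = min(len(sta), len(lta))          # align both windows to the trace end
    return sta[-n:] / (lta[-n:] + 1e-12)

fs = 100                                 # sampling rate (Hz)
t = np.arange(0, 60, 1 / fs)
trace = np.random.normal(0, 1, t.size)
trace[3000:3500] += 8 * np.random.normal(0, 1, 500)   # synthetic "event"

ratio = sta_lta(trace, nsta=1 * fs, nlta=10 * fs)
print("triggered:", np.any(ratio > 4.0))  # trigger when STA/LTA exceeds threshold
```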

Keywords: Carl STA/LTA, features extraction, real time, dataset, machine learning, seismic detection

Procedia PDF Downloads 119
273 Spatial and Time Variability of Ambient Vibration H/V Frequency Peak

Authors: N. Benkaci, E. Oubaiche, J.-L. Chatelain, R. Bensalem, K. Abbes

Abstract:

The ambient vibration H/V technique is widely used nowadays in microzonation studies because of its easy field handling and its low cost compared to other geophysical methods. However, in the presence of complex geology or lateral heterogeneity, evidenced by more than one peak frequency in the H/V curve, it is difficult to interpret the results, especially when soil information is lacking. In this work, we focus on the construction site of the 40,000-seat Baraki stadium, located on the north-east side of the Mitidja basin (Algeria), to identify the seismic wave amplification zones. H/V curve analysis leads to the observation of spatial and time variability of the H/V frequency peaks. The spatial variability allows dividing the studied area into three main zones: (1) one with a predominant frequency around 1.5 Hz showing an important amplification level, (2) a second exhibiting two peaks, at 1.5 Hz and in the 4 Hz - 10 Hz range, and (3) a third zone characterized by a plateau between 2 Hz and 3 Hz. These H/V curve categories reveal a consequent lateral heterogeneity dividing the stadium site roughly in the middle. Furthermore, continuous ambient vibration recording over several weeks shows that the first peak at 1.5 Hz in the second zone completely disappears between 2 am and 4 am and reaches its maximum amplitude around 12 am. Consequently, the anthropogenic noise source generating these important variations could be the Algiers Rocade Sud highway, located in the maximum amplification azimuth direction of the H/V curves. This work points out that the H/V method is an important tool to perform nano-zonation studies prior to geotechnical and geophysical investigations, and that, in some cases, the H/V technique fails to reveal the resonance frequency in the absence of strong anthropogenic sources.
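
For orientation, a bare-bones sketch of the H/V spectral ratio computed from three-component ambient noise; real workflows add windowing, detrending, tapering, and spectral smoothing (e.g., Konno-Ohmachi), which are omitted here:

```python
import numpy as np

def hv_ratio(north, east, vertical, fs):
    freqs = np.fft.rfftfreq(len(vertical), d=1 / fs)
    n_spec = np.abs(np.fft.rfft(north))
    e_spec = np.abs(np.fft.rfft(east))
    v_spec = np.abs(np.fft.rfft(vertical))
    h_spec = np.sqrt((n_spec**2 + e_spec**2) / 2)  # quadratic-mean horizontal
    return freqs, h_spec / (v_spec + 1e-12)

fs = 100
rng = np.random.default_rng(2)
n, e, v = (rng.normal(size=fs * 600) for _ in range(3))  # 10 min synthetic noise
freqs, hv = hv_ratio(n, e, v, fs)
peak = freqs[1:][np.argmax(hv[1:])]   # skip the DC bin; real data would show the site resonance
print(f"H/V peak near {peak:.2f} Hz")
```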

Keywords: ambient vibrations, amplification, fundamental frequency, lateral heterogeneity, site effect

Procedia PDF Downloads 235
272 Risk Assessment of Lead Element in Red Peppers Collected from Marketplaces in Antalya, Southern Turkey

Authors: Serpil Kilic, Ihsan Burak Cam, Murat Kilic, Timur Tongur

Abstract:

Interest in lead (Pb) has increased considerably in recent years due to knowledge about the potential toxic effects of this element. Exposure to heavy metals above the acceptable limit affects human health. Indeed, Pb accumulates through food chains up to toxic concentrations; therefore, it can pose a potential threat to human health. A sensitive and reliable method for the determination of Pb in red pepper was developed in the present study. Samples (33 red pepper products of different brands) were purchased from different markets in Turkey. The selected method validation criteria (linearity, limit of detection, limit of quantification, recovery, and trueness) were demonstrated. Recovery values close to 100% showed adequate precision and accuracy for the analysis. According to the results of the red pepper analysis, lead was detected in all of the tested samples at various concentrations. A Perkin-Elmer ELAN DRC-e model ICP-MS system was used for the detection of Pb. Organic red pepper was used to obtain a matrix for all method validation studies. The certified reference material, FAPAS chili powder, was digested and analyzed together with the different sample batches. Three replicates from each sample were digested and analyzed. The exposure levels of the element were discussed considering the scientific opinions of the European Food Safety Authority (EFSA), which is the European Union's (EU) risk assessment source associated with food safety. The Target Hazard Quotient (THQ) was described by the United States Environmental Protection Agency (USEPA) for the calculation of potential health risks associated with long-term exposure to chemical pollutants. The THQ calculation incorporates the intake of the element, exposure frequency and duration, body weight, and the oral reference dose (RfD). If the THQ value is lower than one, the exposed population is assumed to be safe, and 1 < THQ < 5 means that the exposed population is in a level-of-concern interval. In this study, the THQ of Pb was obtained as < 1. The THQ calculations showed that the values were below one for all the tested samples, meaning the samples did not pose a health risk to the local population. This work was supported by The Scientific Research Projects Coordination Unit of Akdeniz University, Project Number FBA-2017-2494.
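
A sketch of the USEPA-style THQ calculation referred to above; the formula shown is the commonly used form, and all input values are invented placeholders rather than the study's measured data:

```python
def thq(ef, ed, fir, c, rfd, bw, at):
    """Target Hazard Quotient.
    ef: exposure frequency (days/year), ed: exposure duration (years),
    fir: food ingestion rate (g/day),   c: metal concentration (mg/kg),
    rfd: oral reference dose (mg/kg/day), bw: body weight (kg),
    at: averaging time (days). The factor 1e-3 converts g of food to kg."""
    return (ef * ed * fir * c * 1e-3) / (rfd * bw * at)

# Placeholder inputs, including a provisional RfD value for Pb
value = thq(ef=365, ed=70, fir=5, c=0.05, rfd=0.0035, bw=70, at=365 * 70)
print(f"THQ = {value:.3f} -> {'safe' if value < 1 else 'level of concern'}")
```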

Keywords: lead analyses, red pepper, risk assessment, daily exposure

Procedia PDF Downloads 165
271 Assessment of Climate Induced Hazards in Coastal Zone of Bangladesh: A Case Study of Koyra Upazilla under Khulna District and Shyamnagar Upazilla under Satkhira District

Authors: Kazi Ashief Mahmood

Abstract:

Geographically, Bangladesh is located in a natural-hazard-prone area. Compared to the rest of the country, the coastal sub-districts are more vulnerable to climate variability and change. However, the hydro-geophysical reality of the sub-districts predominantly determines their contexts of vulnerability, and its nature differs accordingly. Intriguingly, the poorest areas appear to be the most cornered among the different vulnerable sectors. Among these deprived segments, women, persons with disability, and minorities are generally more vulnerable, and they face a high risk of being marginalized. The most threatening hydro-geophysical climate vulnerability has been created by the prolonged dry season, as observed at Koyra Upazilla in Khulna district and Shyamnagar Upazilla in Satkhira district. The prolonged dry season creates severe surface salinity, because of which farmers cannot produce crops or use their land for cultivation. The absence of land-based production and employment in the area has led to severe food insecurity. As a result, farmers tend to change their livelihood options, and many of them are forced to migrate to other areas of the country in search of a livelihood. Besides salinity intrusion, waterlogging, drought, and other climate-change-induced hazards are endangering safe drinking water sources and putting smallholders out of agriculture-based livelihoods in Koyra and Shyamnagar Upazilla. A sizeable fraction of smallholders are still trying to hold on to their small-scale shrimp production, despite being under pressure to sell off their cultivated lands to influential shrimp merchants. While their desperate effort to take advantage of the increasing salinity is somewhat successful, their families still face a greater risk of health hazards owing to the lack of safe drinking water. Unless the issue of salinity in drinking water is redressed, the state of the affected people will be in great jeopardy. Most of the inhabitants of Koyra and Shyamnagar Upazilla live under the poverty line; thus, poverty is a major factor that intensifies the vulnerability caused by hydro-geophysical climatic conditions. The government and different NGOs are trying to improve the present scenario by implementing different disaster risk reduction projects along with poverty reduction for community empowerment.

Keywords: assessment, climate change, climate induced hazards, coastal zone

Procedia PDF Downloads 400
270 Weight Estimation Using the K-Means Method in Steelmaking’s Overhead Cranes in Order to Reduce Swing Error

Authors: Seyedamir Makinejadsanij

Abstract:

One of the most important factors in the production of quality steel is knowing the exact weight of the steel in the steelmaking area. In this study, a calculation method is presented to estimate the exact weight of the melt as well as of the objects transported by the overhead cranes. Iran Alloy Steel Company's steelmaking area has three 90-ton cranes, which are responsible for transferring ladles and ladle caps between 34 areas in the melt shop. Each crane is equipped with a Disomat Tersus weighing system that calculates and displays the weight in real time. The measured weight of a moving object varies because of swinging, and the weighing system has an error of about +-5%. This means that when an object weighing about 80 tons is moved by a crane, the device (Disomat Tersus system) reads about 4 tons more or 4 tons less, and this is the biggest obstacle to obtaining the real weight. The k-means algorithm, an unsupervised clustering method, was used here. The best result was obtained with 3 centers: compared to the normal average (one center) or two, four, five, and six centers, the best answer is obtained with 3 centers, which is logically due to the elimination of noise above and below the real weight. Every day, a standard weight is moved by the working cranes to test and calibrate them. The results show an error of about 40 kg per 60 tons (the standard weight); as a result, with this method, the accuracy of the moving-weight estimate is 99.95%. K-means is used to calculate the exact mean of the readings; the stopping criterion of the algorithm is 1000 iterations or no points moving between the clusters. As a result of the implementation of this system, the crane operator does not have to stop while moving objects and continues the activity regardless of weight calculations. Production speed also increased, and human error decreased.
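
A minimal sketch of the clustering step: k-means with three clusters applied to in-motion load-cell readings, taking the middle cluster's center as the weight estimate so that swing noise falls into the clusters above and below it. All values are synthetic:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
true_weight = 80_000  # kg
swing = 4_000 * np.sin(np.linspace(0, 20 * np.pi, 500))   # +-5% swing oscillation
readings = true_weight + swing + rng.normal(0, 200, 500)  # load-cell samples

km = KMeans(n_clusters=3, n_init=10, max_iter=1000).fit(readings.reshape(-1, 1))
centers = np.sort(km.cluster_centers_.ravel())
print(f"estimate: {centers[1]:.0f} kg (middle cluster)")
```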

Keywords: k-means, overhead crane, melt weight, weight estimation, swing problem

Procedia PDF Downloads 88
269 Multi-Objective Optimal Design of a Cascade Control System for a Class of Underactuated Mechanical Systems

Authors: Yuekun Chen, Yousef Sardahi, Salam Hajjar, Christopher Greer

Abstract:

This paper presents a multi-objective optimal design of a cascade control system for an underactuated mechanical system. Cascade control structures usually include two control algorithms (inner and outer). To design such a control system properly, the following conflicting objectives should be considered at the same time: 1) the inner closed loop must be faster than the outer one, 2) the inner loop should quickly reject any disturbance and prevent it from propagating to the outer loop, 3) the controlled system should be insensitive to measurement noise, and 4) the controlled system should be driven by optimal energy. Such a control problem can be formulated as a multi-objective optimization problem in which the optimal trade-offs among these design goals are found. To the authors' best knowledge, such a problem has not been studied in a multi-objective setting so far. In this work, an underactuated mechanical system consisting of a rotary servo motor and a ball and beam is used for the computer simulations; the setup parameters of the inner and outer control systems are tuned by NSGA-II (Non-dominated Sorting Genetic Algorithm), and the dominancy concept is used to find the optimal design points (see the sketch below). The solution of this problem is not a single optimal cascade controller, but rather a set of optimal cascade controllers (called the Pareto set) that represents the optimal trade-offs among the selected design criteria. The image of the Pareto set under the objective functions is called the Pareto front. The solution set is presented to the decision-maker, who can choose any point to implement. The simulation results, in terms of the Pareto front and time responses to external signals, show the competing nature of the design objectives. The presented study may become the basis for the multi-objective optimal design of multi-loop control systems.
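
The dominancy concept reduces to a simple filter: a design point is kept only if no other point is at least as good in every objective and strictly better in at least one. A minimal sketch (objective values invented; all objectives minimized):

```python
import numpy as np

def pareto_front(F):
    """Return a boolean mask of non-dominated rows of F (all objectives minimized)."""
    n = F.shape[0]
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        if not keep[i]:
            continue
        # Row j dominates row i if it is <= everywhere and < somewhere.
        dominates_i = np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1)
        if np.any(dominates_i):
            keep[i] = False
    return keep

F = np.array([[1.0, 5.0], [2.0, 3.0], [3.0, 4.0], [4.0, 1.0]])  # e.g. [settling time, energy]
print(F[pareto_front(F)])  # [3., 4.] is dominated by [2., 3.] and is dropped
```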

Keywords: cascade control, multi-Loop control systems, multiobjective optimization, optimal control

Procedia PDF Downloads 148
268 Optimizing Super Resolution Generative Adversarial Networks for Resource-Efficient Single-Image Super-Resolution via Knowledge Distillation and Weight Pruning

Authors: Hussain Sajid, Jung-Hun Shin, Kum-Won Cho

Abstract:

Image super-resolution is one of the most common computer vision problems, with many important applications. Generative adversarial networks (GANs) have promoted remarkable advances in single-image super-resolution (SR) by recovering photo-realistic images. However, the high memory requirements of GAN-based SR models (mainly the generators) lead to performance degradation and increased energy consumption, making it difficult to deploy them on resource-constrained devices. To relieve this problem, in this paper we introduce an optimized and highly efficient architecture for the SR-GAN generator model, utilizing model compression techniques such as knowledge distillation and pruning, which work together to reduce the storage requirements of the model and also improve its performance. Our method begins by distilling the knowledge from a large pre-trained model into a lightweight model using different loss functions. Then, iterative weight pruning is applied to the distilled model to remove less significant weights based on their magnitude, resulting in a sparser network. Knowledge distillation reduces the model size by 40%; pruning then reduces it further by 18%. To accelerate the learning process, we employ the Horovod framework for distributed training on a cluster of 2 nodes, each with 8 GPUs, resulting in improved training performance and faster convergence. Experimental results on various benchmarks demonstrate that the proposed compressed model significantly outperforms state-of-the-art methods in terms of peak signal-to-noise ratio (PSNR), structural similarity index measure (SSIM), and image quality for x4 super-resolution tasks.
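
A compressed sketch of the two steps named above in PyTorch; the tiny stand-in generators, the loss weighting, and the pruning amount are illustrative assumptions, not the paper's architecture or schedule:

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

teacher = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
                        nn.Conv2d(64, 3, 3, padding=1))   # stand-in large generator
student = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                        nn.Conv2d(16, 3, 3, padding=1))   # lightweight student

opt = torch.optim.Adam(student.parameters(), lr=1e-4)
lr_batch, hr_batch = torch.rand(4, 3, 32, 32), torch.rand(4, 3, 32, 32)

# 1) Knowledge distillation: match the teacher's output plus the ground truth.
with torch.no_grad():
    teacher_out = teacher(lr_batch)
student_out = student(lr_batch)
loss = nn.functional.mse_loss(student_out, hr_batch) \
     + 0.5 * nn.functional.mse_loss(student_out, teacher_out)
loss.backward()
opt.step()

# 2) Magnitude pruning: zero the smallest 18% of each conv layer's weights.
for module in student.modules():
    if isinstance(module, nn.Conv2d):
        prune.l1_unstructured(module, name="weight", amount=0.18)
```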

Keywords: single-image super-resolution, generative adversarial networks, knowledge distillation, pruning

Procedia PDF Downloads 88
267 A Compact Extended Laser Diode Cavity Centered at 780 nm for Use in High-Resolution Laser Spectroscopy

Authors: J. Alvarez, J. Pimienta, R. Sarmiento

Abstract:

Diode lasers operating in free-running mode exhibit shifting and broadening determined by external factors such as temperature, current, or mechanical vibrations, which makes them unsuitable for applications such as spectroscopy, metrology, and the cooling of atoms, among others. Different configurations can reduce the spectral width of a laser; one of the most effective is to extend the optical resonator of the laser diode and use optical feedback, either with the help of a partially reflective mirror or with a diffraction grating. The latter configuration not only reduces the spectral width of the laser line but also allows its working wavelength to be coarsely adjusted over a wide range, typically ~10 nm, by slightly varying the angle of the diffraction grating. Two settings are commonly used for this purpose: the Littrow configuration and the Littman-Metcalf configuration. In this paper, we present the design, construction, and characterization of a compact extended laser cavity in the Littrow configuration. The cavity is compact and was machined from an aluminum block using computer numerical control (CNC); it has a mass of only 380 g. The design was tested on laser diodes with different wavelengths (650 nm, 780 nm, and 795 nm) but can be equally effective at other wavelengths. This report details the results obtained from the extended cavity working at a wavelength of 780 nm, with an output power of around 35 mW and a linewidth of less than 1 MHz. The cavity was used to observe the spectrum of the corresponding rubidium D2 line. By modulating the current and with the help of phase detection techniques, a dispersion signal with an excellent signal-to-noise ratio was generated, which allowed the laser to be stabilized to a transition of the hyperfine structure of rubidium with a proportional-integral (PI) controller circuit made with precision operational amplifiers.
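
As a worked example of the coarse wavelength tuning, the first-order Littrow condition relates the grating angle to the wavelength; the 1800 lines/mm grating below is an assumed, common choice rather than a value from the paper:

```python
# First-order Littrow condition: m * lambda = 2 * d * sin(theta), with m = 1.
import math

lines_per_mm = 1800              # assumed grating, not from the paper
d = 1e-3 / lines_per_mm          # groove spacing in metres
lam = 780e-9                     # target wavelength (Rb D2 line)

theta = math.degrees(math.asin(lam / (2 * d)))
print(f"Littrow angle for 780 nm: {theta:.1f} degrees")  # ~44.6 degrees
```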

Keywords: Littrow, Littman-Metcalf, line width, laser stabilization, hyperfine structure

Procedia PDF Downloads 222
266 Marker-Controlled Level-Set for Segmenting Breast Tumor from Thermal Images

Authors: Swathi Gopakumar, Sruthi Krishna, Shivasubramani Krishnamoorthy

Abstract:

Contactless, painless, and radiation-free thermal imaging technology is one of the preferred screening modalities for the detection of breast cancer. However, the poor signal-to-noise ratio and the inexorable need to preserve the edges defining cancerous and normal cells make the segmentation process difficult and hence unsuitable for computer-aided diagnosis of breast cancer. This paper presents key findings from research appraising two promising techniques for the detection of breast cancer: (I) marker-controlled level-set segmentation of an anisotropic-diffusion-filtered preprocessed image versus (II) marker-controlled level-set segmentation of a Gaussian-filtered image. Gaussian filtering processes the image uniformly, whereas anisotropic filtering processes only specific areas of a thermographic image. The preprocessed (Gaussian-filtered and anisotropic-filtered) images of breast samples were then used for segmentation. The segmentation of the breast starts with an initial level-set function. In this study, a marker refers to the position in the image at which the initial level-set function is applied. The markers are generally placed on the left and right sides of the breast and may vary with breast size. The proposed method was carried out on images from an online database with samples collected from women of varying breast characteristics. It was observed that the breast could be segmented from the background by adjusting the markers. From the results, it was observed that as a preprocessing technique, anisotropic filtering with level-set segmentation preserved the edges more effectively than Gaussian filtering. The image segmented after anisotropic filtering was found to be more suitable for feature extraction, enabling automated computer-aided diagnosis of breast cancer.
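
A minimal sketch of Perona-Malik anisotropic diffusion, the edge-preserving smoothing compared against Gaussian filtering above; the iteration count, kappa, and conduction function are illustrative assumptions:

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=20, kappa=30.0, gamma=0.2):
    img = img.astype(float).copy()
    for _ in range(n_iter):
        # Finite-difference gradients toward the four neighbours
        dn = np.roll(img, -1, axis=0) - img
        ds = np.roll(img, 1, axis=0) - img
        de = np.roll(img, -1, axis=1) - img
        dw = np.roll(img, 1, axis=1) - img
        # Conduction coefficient is small across strong edges, so edges persist
        cn, cs = np.exp(-(dn / kappa) ** 2), np.exp(-(ds / kappa) ** 2)
        ce, cw = np.exp(-(de / kappa) ** 2), np.exp(-(dw / kappa) ** 2)
        img += gamma * (cn * dn + cs * ds + ce * de + cw * dw)
    return img

thermogram = np.random.rand(128, 128) * 255   # stand-in for a breast thermogram
smoothed = anisotropic_diffusion(thermogram)  # smooths flat regions, keeps edges
```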

Keywords: anisotropic diffusion, breast, Gaussian, level-set, thermograms

Procedia PDF Downloads 374
265 Improvement of Thermal Comfort Conditions in an Urban Space "Case Study: The Square of Independence, Setif, Algeria"

Authors: Ballout Amor, Yasmina Bouchahm, Lacheheb Dhia Eddine Zakaria

Abstract:

Several studies around the world have been conducted on the urban heat island phenomenon, and according to the results obtained, one of the most important factors that influence this phenomenon is the mineralization of cities, meaning the reduction of evaporative urban surfaces as vegetation and wetlands are replaced with concrete and asphalt. The use of vegetation and water can change the urban environment and improve comfort, thus reducing the heat island. Trees act as a mask against sun, wind, and sound, and also as a source of humidity that reduces the temperature of the air and of surrounding surfaces. Water also acts as a buffer against noise; it is likewise a source of moisture and regulates temperature, not to mention its psychological effect on humans. Our main objective in this paper is to determine the impact of vegetation, ponds, and fountains on the urban microclimate in general, and in particular on the thermal comfort of people along Independence Square in the Algerian city of Sétif, which has a semi-arid climate. In order to reach this objective, a comparative study between different scenarios was carried out; the use of the ENVI-met program enabled us to model the urban environment of Independence Square and to study the possibility of improving comfort conditions by adding vegetation and water ponds. After studying the results obtained (temperature, relative humidity, wind speed, and the PMV and PPD indicators), the efficiency of the additions made to the square was confirmed, which supported our assumptions regarding comfort conditions on the studied site. Finally, we develop recommendations and solutions that may contribute to greater comfort in Independence Square.

Keywords: comfort in outer space, urban environment, scenarisation, vegetation, water ponds, public square, simulation

Procedia PDF Downloads 450
264 Building Transparent Supply Chains through Digital Tracing

Authors: Penina Orenstein

Abstract:

In today’s world, particularly with COVID-19 a constant worldwide threat, organizations need greater visibility over their supply chains more than ever before, in order to find areas for improvement and greater efficiency, reduce the chances of disruption, and stay competitive. Supply chain mapping is the concept of mapping, in detail, every process and route between each vendor and supplier. The simplest method of mapping involves sourcing publicly available data, including news and financial information concerning relationships between suppliers. An additional layer of information would be disclosed by large, direct suppliers about their production and logistics sites. While this method has the advantage of not requiring any input from suppliers, it also does not allow much transparency beyond the first supplier tier and may generate irrelevant data (noise) that must be filtered out to find the actionable data. The primary goal of this research is to build data maps of supply chains by focusing on a layered approach. Using these maps, the secondary goal is to address the question of whether the supply chain can be re-engineered to make improvements, for example, to lower the carbon footprint. Using a drill-down approach, the end result is a comprehensive map detailing the linkages between tier-one, tier-two, and tier-three suppliers superimposed on a geographical map. The driving force behind this idea is to be able to trace individual parts to the exact site where they are manufactured. In this way, companies can ensure sustainability practices from the production of raw materials through to the finished goods. The approach allows companies to identify and anticipate vulnerabilities in their supply chain. It unlocks predictive analytics capabilities and enables them to act proactively. The research is particularly compelling because it unites network science theory with empirical data and presents the results in a visual, intuitive manner.
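
A minimal sketch of the layered map as a directed graph in NetworkX; the tiers, sites, and geocode are invented for illustration:

```python
import networkx as nx

G = nx.DiGraph()
edges = [
    ("OEM", "Tier1-A"), ("OEM", "Tier1-B"),
    ("Tier1-A", "Tier2-C"), ("Tier1-B", "Tier2-D"),
    ("Tier2-C", "Tier3-E"), ("Tier2-D", "Tier3-E"),
]
G.add_edges_from(edges)  # edge direction: buyer -> supplier
nx.set_node_attributes(G, {"Tier3-E": (22.5, 114.1)}, name="lat_lon")  # site geocode

# Drill down: trace every upstream site feeding the OEM (tier one through tier three)
upstream = nx.descendants(G, "OEM")
print(sorted(upstream))
# A single tier-three site supplying both branches flags a concentration risk.
```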

Keywords: data mining, supply chain, empirical research, data mapping

Procedia PDF Downloads 172
263 An Experimental Study on the Coupled Heat Source and Heat Sink Effects on Solid Rockets

Authors: Vinayak Malhotra, Samanyu Raina, Ajinkya Vajurkar

Abstract:

Enhancing rocket efficiency by controlling external factors in solid rocket motors has been an active area of research for most terrestrial and extra-terrestrial system operations. Appreciable work has been done, but the complexity of the problem has prevented thorough understanding due to heterogeneous heat and mass transfer. On record, severe issues have surfaced amounting to irreplaceable loss of human life, instruments, and facilities, and to the huge amounts of money invested every year. The coupled effect of an external heat source and an external heat sink is an aspect yet to be articulated in combustion. Better understanding of this coupled phenomenon will bring higher safety standards, more efficient missions, and reduced hazard risks, with better design, validation, and testing. The experiment will help in understanding the coupled effect of an external heat sink and heat source on the burning process, contributing to better combustion and fire safety, which are very important for efficient and safer rocket flights and space missions. Safety is the most prevalent issue in rockets, which, combined with poor combustion efficiency, motivates research efforts to evolve superior rockets. This is significant for real engineering, scientific, and practical systems and applications. One potential application is solid rocket motors (S.R.M.). The study may help in: (i) understanding the effect on the efficiency of core engines due to the primary boosters when considered as a source, (ii) choosing suitable heat sink materials for space missions so as to vary the efficiency of the solid rocket depending on the mission, and (iii) giving an idea of how the preheating of a successive stage, due to the previous stage acting as a source, may affect the mission. The present work tracks the resultant temperature and thus the heat transfer, which is expected to be non-linear because of heterogeneous heat and mass transfer. The study will deepen the understanding of controlled inter-energy conversions and the coupled effect of external source(s) and sink(s) surrounding the burning fuel, eventually leading to better combustion and thus better propulsion. The work is motivated by the need for enhanced fire safety and better rocket efficiency. The specific objective of the work is to understand the coupled effect of an external heat source and sink on propellant burning and to investigate the role of key controlling parameters. Results so far indicate that there exists a singularity in the coupled effect. The dominance of the external heat sink and heat source decides the relative rocket flight in solid rocket motors (S.R.M.).

Keywords: coupled effect, heat transfer, sink, solid rocket motors, source

Procedia PDF Downloads 219
262 Introduction of Mass Rapid Transit System and Its Impact on Para-Transit

Authors: Khalil Ahmad Kakar

Abstract:

In developing countries, the increase in automobiles and low-capacity public transport (para-transit), which creates congestion, pollution, noise, and traffic accidents, is a most critical quandary. These issues are under analysis by assessors seeking to break down the puzzle and propose a sustainable urban public transport system. Kabul is one of those urban areas whose inhabitants suffer from the lack of a tolerable and friendly public transport system. The city is highly populous and overcrowded, with a population of around 4.5 million. Para-transit is the dominant public transit system, with a very poor level of service and low-capacity vehicles (6-20 passengers). Therefore, after detailed investigations, this study suggests a bus rapid transit (BRT) system for Kabul City, aiming to mitigate the role of informal transport and decrease congestion. The research covers three parts. In the first part, aggregate (four-step) travel demand modelling is applied to determine the number of para-transit users and to assess a BRT network based on the highest passenger demand for the public transport mode. In the second part, a stated preference (SP) survey and a binary logit model are used to estimate the utility of the existing para-transit mode and the planned BRT system. Finally, the impact of the predicted BRT system on para-transit is evaluated. The outcome, based on high travel demand, suggests a 10 km network for the proposed BRT system, originating in District 10 and ending at Kabul International Airport. In addition, the result from the disaggregate travel mode-choice model, based on the SP data and the logit model, indicates that the predicted mass rapid transit system has a higher utility, with a significant impact on reducing para-transit.
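
A minimal sketch of the binary logit choice probability underlying the mode-choice model; the utility coefficients and attribute values are invented for illustration:

```python
import math

def logit_prob(v_brt, v_para):
    """P(choose BRT) under a binary logit model."""
    return math.exp(v_brt) / (math.exp(v_brt) + math.exp(v_para))

# V = beta_time * in-vehicle time (min) + beta_cost * fare (currency units)
beta_time, beta_cost = -0.05, -0.01
v_brt  = beta_time * 25 + beta_cost * 30   # faster but pricier
v_para = beta_time * 40 + beta_cost * 20   # slower but cheaper
print(f"P(BRT) = {logit_prob(v_brt, v_para):.2f}")  # ~0.66 with these values
```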

Keywords: BRT, para-transit, travel demand modelling, Kabul City, logit model

Procedia PDF Downloads 181
261 Association of Genetically Proxied Cholesterol-Lowering Drug Targets and Head and Neck Cancer Survival: A Mendelian Randomization Analysis

Authors: Danni Cheng

Abstract:

Background: Preclinical and epidemiological studies have reported potential protective effects of low-density lipoprotein cholesterol (LDL-C) lowering drugs on head and neck squamous cell cancer (HNSCC) survival, but the evidence for causality has not been consistent. Genetic variants associated with LDL-C lowering drug targets can predict the effects of their therapeutic inhibition on disease outcomes. Objective: We aimed to evaluate the causal association of genetically proxied cholesterol-lowering drug targets and circulating lipid traits with cancer survival in HNSCC patients stratified by human papillomavirus (HPV) status, using two-sample Mendelian randomization (MR) analyses. Method: Single-nucleotide polymorphisms (SNPs) in the gene regions of LDL-C lowering drug targets (HMGCR, NPC1L1, CETP, PCSK9, and LDLR) associated with LDL-C levels in a genome-wide association study (GWAS) from the Global Lipids Genetics Consortium (GLGC) were used to proxy LDL-C lowering drug action. SNPs proxying circulating lipids (LDL-C, HDL-C, total cholesterol, triglycerides, apolipoprotein A, and apolipoprotein B) were also derived from the GLGC data. Genetic associations of these SNPs with cancer survival were derived from 1,120 HPV-positive oropharyngeal squamous cell carcinoma (OPSCC) and 2,570 non-HPV-driven HNSCC patients in the VOYAGER program. We estimated the causal associations of LDL-C lowering drugs and circulating lipids with HNSCC survival using the inverse-variance weighted (IVW) method. Results: Genetically proxied HMGCR inhibition was significantly associated with worse overall survival (OS) in non-HPV-driven HNSCC patients (inverse-variance weighted hazard ratio (HR IVW), 2.64 [95% CI, 1.28-5.43]; P = 0.01) but with better OS in HPV-positive OPSCC patients (HR IVW, 0.11 [95% CI, 0.02-0.56]; P = 0.01). Estimates for NPC1L1 were strongly associated with worse OS in both total HNSCC (HR IVW, 4.17 [95% CI, 1.06-16.36]; P = 0.04) and non-HPV-driven HNSCC patients (HR IVW, 7.33 [95% CI, 1.63-32.97]; P = 0.01). Similarly, genetically proxied PCSK9 inhibition was significantly associated with poor OS in non-HPV-driven HNSCC (HR IVW, 1.56 [95% CI, 1.02-2.39]). Conclusion: Genetically proxied long-term HMGCR inhibition was significantly associated with decreased OS in non-HPV-driven HNSCC and increased OS in HPV-positive OPSCC, while genetically proxied NPC1L1 and PCSK9 inhibition was associated with worse OS in total and non-HPV-driven HNSCC patients. Further research is needed to understand whether these drugs have consistent associations with head and neck tumor outcomes.
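
For orientation, the IVW estimate combines per-SNP Wald ratios weighted by the precision of the outcome associations. A minimal sketch with invented summary statistics:

```python
import numpy as np

# beta_x: SNP-exposure (LDL-C) effects; beta_y / se_y: SNP-outcome (survival) effects
beta_x = np.array([0.12, 0.08, 0.15, 0.10])
beta_y = np.array([0.030, 0.018, 0.041, 0.022])
se_y   = np.array([0.010, 0.012, 0.015, 0.011])

w = beta_x**2 / se_y**2              # inverse-variance weights
wald = beta_y / beta_x               # per-SNP causal (Wald ratio) estimates
ivw = np.sum(w * wald) / np.sum(w)   # fixed-effect IVW estimate
se_ivw = np.sqrt(1 / np.sum(w))
print(f"IVW beta = {ivw:.3f} +/- {se_ivw:.3f}")
```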

Keywords: Mendelian randomization analysis, head and neck cancer, cancer survival, cholesterol, statin

Procedia PDF Downloads 94
260 Understanding the Reasons for Flooding in Chennai and Strategies for Making It Flood Resilient

Authors: Nivedhitha Venkatakrishnan

Abstract:

Flooding in urban areas of India has become a recurring phenomenon and a nightmare for most cities, a consequence of man-made disruption resulting in disaster. City planning in India falls short of withstanding hydro-generated disasters. Urbanization, high population density, expanding informal settlements, and environmental degradation from uncollected and untreated waste flowing into natural drains and water bodies have disrupted the natural mechanisms of hazard protection such as drainage channels, wetlands, and floodplains. The magnitude and impact of the mishap were high because of the failure of the development policies, strategies, and plans that the city had adopted. In the current scenario, cities are becoming the homes of the future, with economic diversification bringing more investment into cities, especially in the domains of urban infrastructure, planning, and design. Urban futures in these low-elevation coastal zones face unprecedented risks and threats. The study focuses on three major pillars of resilience: recover, resist, and restore. Preparing to handle such events bridges the gap between disaster response management and risk reduction and requires a paradigm shift. The study involved qualitative research and a system design approach (framework). The initial stage mapped the urban water morphology against spatial growth, giving an insight into the water bodies that have gone missing over the years in the process of urbanization. The major finding of the study was that missing links in the traditional water harvesting network were a major cause of this man-made disaster. The research conceptualized a sponge city framework that would guide growth through institutional frameworks at different levels. The next stage examined the implementation process at various levels to ensure the paradigm shift, demonstrating the concept at the neighborhood level: where, how, and what the functions and benefits of each component are. Design decisions were quantified in terms of rainwater harvesting and surface runoff: how much water is collected, and how it can be collected, stored, and reused. The study concludes with recommendations for water mitigation spaces that would revive the traditional harvesting network.
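
As a sketch of the kind of quantification mentioned above, annual rainwater harvesting potential is commonly estimated as catchment area times rainfall depth times a runoff coefficient. The catchment size, rainfall depth, and coefficients below are illustrative assumptions, not figures from the study.

```python
# Rough rainwater-harvest estimate: V = A * R * C, where A is catchment
# area (m^2), R is annual rainfall depth (m), and C is a dimensionless
# runoff coefficient for the surface. All inputs are illustrative.
RUNOFF_COEFF = {"roof": 0.85, "paved": 0.70, "green": 0.30}

def harvest_potential_m3(area_m2: float, rainfall_mm: float, surface: str) -> float:
    """Annual harvestable volume in cubic metres."""
    return area_m2 * (rainfall_mm / 1000.0) * RUNOFF_COEFF[surface]

# Example: a 150 m^2 roof under ~1,400 mm/year rainfall (the order of
# magnitude of Chennai's annual rainfall) would yield roughly:
v = harvest_potential_m3(area_m2=150, rainfall_mm=1400, surface="roof")
print(f"~{v:.0f} m^3/year ({v * 1000:.0f} litres)")
```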

Keywords: flooding, man made disaster, resilient city, traditional harvesting network, waterbodies

Procedia PDF Downloads 137
259 Optimization and Evaluation of Different Pathways to Produce Biofuel from Biomass

Authors: Xiang Zheng, Zhaoping Zhong

Abstract:

In this study, Aspen Plus was used to simulate the whole process of biomass conversion to liquid fuel via different routes, and the main results for material and energy flows were obtained. Process optimization and evaluation were carried out on four routes: cellulosic biomass pyrolysis/gasification followed by low-carbon olefin synthesis and olefin oligomerization; biomass hydrothermal depolymerization and polymerization to jet fuel; biomass fermentation to ethanol; and biomass pyrolysis to liquid fuel. The environmental impacts of three biomass feedstocks (poplar wood, corn stover, and rice husk) were compared for the gasification-synthesis pathway. The global warming potential, acidification potential, and eutrophication potential of the three biomasses followed the same order: rice husk > poplar wood > corn stover. For human health hazard potential and solid waste potential, the order was poplar wood > rice husk > corn stover. In the poplar pathway, 100 kg of poplar biomass yielded 11.9 kg of aviation kerosene fraction and 6.3 kg of gasoline fraction; the energy conversion rate of the system was 31.6% when only the aviation kerosene product was counted as output energy. In the base case of the hydrothermal depolymerization process, 14.41 kg of aviation kerosene was produced per 100 kg of biomass. The energy conversion rate of the base case was 33.09%, which rose to 38.47% after optimal utilization of lignin gasification and steam reforming for hydrogen production. The total exergy efficiency of the system increased from 30.48% to 34.43% after optimization, with exergy losses arising mainly from concentrating the dilute precursor solution. Among the environmental impacts, the global warming potential is dominated by the production process. Poplar wood was used as the raw material in the process of producing ethanol from cellulosic biomass; the simulation showed that 827.4 kg of pretreatment mixture, 450.6 kg of fermentation broth, and 24.8 kg of ethanol were produced per 100 kg of biomass. The energy output of boiler combustion reached 94.1 MJ, the power consumption of the process was 174.9 MJ, and the energy conversion rate was 33.5%. The environmental impact was concentrated mainly in the production and agricultural processes. Building on the basic biomass pyrolysis-to-liquid-fuel route, the enzymatic hydrolysis lignin residue from cellulose fermentation to ethanol was used as the pyrolysis feedstock, coupling the fermentation and pyrolysis processes. In the coupled process, 24.8 kg of ethanol and 4.78 kg of upgraded liquid fuel were produced per 100 kg of biomass, with an energy conversion rate of 35.13%.
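
A minimal sketch of the energy-conversion-rate bookkeeping used to compare the routes above: the rate is the energy content of the counted products divided by the energy content of the feedstock. The heating values below are typical literature figures assumed for illustration, not the study's values, so the computed number will differ somewhat from the reported 31.6%.

```python
# Energy conversion rate = sum(product mass * LHV) / (feed mass * LHV_feed).
# Lower heating values (MJ/kg) are typical literature values, assumed here;
# the study's own heating values are not given in the abstract.
LHV = {"poplar": 18.0, "jet_fuel": 43.0, "gasoline": 44.0}

def energy_conversion_rate(feed_kg, feed_key, products):
    """products: list of (mass_kg, LHV_key) counted as useful output."""
    e_in = feed_kg * LHV[feed_key]
    e_out = sum(m * LHV[k] for m, k in products)
    return e_out / e_in

# Mirroring the poplar case: 100 kg feed, 11.9 kg jet fraction, counting
# only the jet product (as in the reported 31.6% figure).
rate = energy_conversion_rate(100, "poplar", [(11.9, "jet_fuel")])
print(f"energy conversion rate ~ {rate:.1%}")
```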

Keywords: biomass conversion, biofuel, process optimization, life cycle assessment

Procedia PDF Downloads 67
258 Perception of Hazards and Risks in Road Utilization as Space for Social Ceremonies in Indigenous Residential Area of Ogbomoso, Nigeria

Authors: Okanlawon Simon Ayorinde, Odunjo Oluronke Omolola, Fadamiro Joseph Akinlabi, Adedibu Afolabi Adebgite

Abstract:

A road is a path established over land, an especially prepared way between places for the use of pedestrians, riders, and vehicles: a hard surface built for vehicles to travel on. The social, economic, and health importance of roads to any community and nation cannot be overestimated. Roads provide access to properties, and they provide mobility, the ability to transport goods and services from one place to another. In the residential zones of many indigenous cities in Nigeria, roads are often blocked for social ceremonies. A road blockage for a ceremony, as used in this study, is a temporary barrier across a road, used to stop or hinder traffic from passing through to the other side. Social ceremonies that may warrant road blockage include marriage, child naming, funerals, celebrations of life achievements, and birthday anniversaries. These activities are likely to generate environmental hazards and their attendant risks, so assessing these hazards and risks in the residential zones of indigenous Nigerian cities is imperative. The study focuses on Ogbomoso, Oyo State, Nigeria. The town has two local government councils, Ogbomoso North and Ogbomoso South. In the absence of land use segregation, house numbering, and street naming, the urban tracts easiest to identify are political wards; of the twenty wards identified in the town, the fifteen whose land use is at least 60% residential were surveyed. The study utilized primary data collected through questionnaire administration. The three major road categories (Trunk A, federal; Trunk B, state; Trunk C, local) were identified, and Trunk C (local) roads were purposively selected as the concern of this study because they are the ones most often blocked for social activities. The major stakeholders interviewed, with their respective sampling methods, were residents (random and systematic), social ceremony organizers (purposive), government officials (purposive), and road users, namely commercial motorists and commercial motorcyclists (random and incidental). Data analysis was mainly descriptive. Two indices were developed to measure respondents' perception: a Hazard Severity Index (HSI) and a Relative Awareness Index (RAI). Policy implications and recommendations were then provided.
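
The abstract does not give the formulas behind HSI and RAI; a common construction for such perception indices is a normalized weighted mean of Likert-scale responses, sketched below under that assumption. The hazard label, scale, and response counts are hypothetical.

```python
# One common form of a severity index: weighted mean of Likert responses,
# HSI = sum(w_i * n_i) / (w_max * sum(n_i)), normalized to [0, 1].
# The construction and the counts are assumptions for illustration; the
# study's exact formula is not published in the abstract.
def severity_index(counts):
    """counts: {weight (1..5): number of respondents choosing it}."""
    w_max = max(counts)
    total = sum(counts.values())
    return sum(w * n for w, n in counts.items()) / (w_max * total)

# Hypothetical responses for the hazard "noise from loudspeakers",
# rated 1 = not severe ... 5 = very severe.
noise_counts = {1: 12, 2: 18, 3: 40, 4: 55, 5: 25}
print(f"HSI(noise) = {severity_index(noise_counts):.2f}")
```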

Keywords: road, residential zones, indigenous cities, blocked, social ceremonies

Procedia PDF Downloads 516
257 The Carbon Footprint Model as a Plea for Cities towards Energy Transition: The Case of Algiers Algeria

Authors: Hachaichi Mohamed Nour El-Islem, Baouni Tahar

Abstract:

Environmental sustainability is more than a transdisciplinary scientific issue; it is the central problem characterizing all modern cities today. In developing countries, this concern is expressed in a plethora of critical urban ills: traffic congestion, air pollution, noise, urban decay, and rising energy consumption and CO2 emissions, which blemish the city landscape and can threaten citizens' health and welfare. As in other developing-world cities, the rapid growth of Algiers' population and the expanding scale of the city lead to increases in daily trips, energy consumption, and CO2 emissions. In addition, the lack of proper and sustainable planning of the city's infrastructure is one of the most pressing issues from which Algiers suffers. The aim of this contribution is to estimate the carbon deficit of the city of Algiers, Algeria, using the ecological footprint model (carbon footprint). To this end, the amount of CO2 from fuel combustion was calculated and aggregated across five sectors (agriculture, industry, residential, tertiary, and transportation), and Algiers' biocapacity (CO2 uptake land) was calculated to determine the ecological overshoot. This study shows that Algiers' transport system is not sustainable: it generates more than 50% of Algiers' total carbon footprint, which cannot be sequestered by the local forest land. The research demonstrates that carbon footprint assessment can be a relevant indicator for designing sustainable strategies and policies that strive to reduce CO2 emissions by managing energy consumption in the transportation sector and reducing the use of fossil fuels as the main energy input.
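
A minimal sketch of the carbon-footprint arithmetic described above: sectoral CO2 emissions are converted into the forest area needed to sequester them, and the ecological overshoot is that area minus the available CO2-uptake land. The emission figures, sequestration rate, and biocapacity below are illustrative assumptions, not the study's data.

```python
# Carbon footprint (in hectares) = CO2 emitted / forest uptake rate.
# A commonly cited average uptake is a few tonnes of CO2 per hectare per
# year; this value and all sector figures below are assumptions.
UPTAKE_T_CO2_PER_HA_YR = 3.6

sector_emissions_t = {            # hypothetical tonnes CO2 per year
    "transportation": 6_000_000,
    "residential":    2_500_000,
    "industry":       1_500_000,
    "tertiary":       1_000_000,
    "agriculture":      300_000,
}

footprint_ha = {s: e / UPTAKE_T_CO2_PER_HA_YR for s, e in sector_emissions_t.items()}
total_ha = sum(footprint_ha.values())
biocapacity_ha = 400_000          # hypothetical local CO2-uptake land

share_transport = footprint_ha["transportation"] / total_ha
print(f"transport share of footprint: {share_transport:.0%}")
print(f"ecological overshoot: {total_ha - biocapacity_ha:,.0f} ha")
```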

Keywords: biocapacity, carbon footprint, ecological footprint assessment, energy consumption

Procedia PDF Downloads 143
256 Quantitative Evaluation of Supported Catalysts Key Properties from Electron Tomography Studies: Assessing Accuracy Using Material-Realistic 3D-Models

Authors: Ainouna Bouziane

Abstract:

The ability of electron tomography to recover the 3D structure of catalysts, with spatial resolution at the subnanometer scale, has been widely explored and reviewed over the last decades. A variety of experimental techniques, based on either Transmission Electron Microscopy (TEM) or Scanning Transmission Electron Microscopy (STEM), have been used to reveal different features of nanostructured catalysts in 3D, but High Angle Annular Dark Field imaging in STEM mode (HAADF-STEM) stands out as the most frequently used, given its chemical sensitivity and its avoidance of imaging artifacts related to diffraction phenomena when dealing with crystalline materials. In this regard, our group has developed a methodology that combines image denoising by undecimated wavelet transforms (UWT) with automated, advanced segmentation procedures and parameter selection methods using CS-TVM (compressed sensing - total variation minimization) algorithms to extract more reliable quantitative information from 3D characterization studies. However, evaluating the accuracy of the magnitudes estimated from the segmented volumes is also an important issue that has not yet been properly addressed, because a perfectly known reference is needed; the problem is particularly complicated in the case of multicomponent material systems. To tackle this key question, we have developed a methodology that incorporates volume reconstruction and segmentation methods. In particular, we have established an approach to evaluate, in quantitative terms, the accuracy of TVM reconstructions that considers the influence of relevant experimental parameters such as the range of tilt angles, image noise level, and object orientation. The approach is based on the analysis of material-realistic 3D phantoms, which include the most relevant features of the system under analysis.
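
A minimal sketch of the phantom-based accuracy check described above: compare a segmented reconstruction voxel-by-voxel against the known phantom, for example via the Dice overlap coefficient and the relative volume error. The toy spherical phantom and noise model below stand in for a material-realistic phantom and the actual tilt-series/TVM/segmentation pipeline.

```python
import numpy as np

def dice(seg: np.ndarray, ref: np.ndarray) -> float:
    """Dice overlap between a binary segmented volume and the known phantom."""
    inter = np.logical_and(seg, ref).sum()
    return 2.0 * inter / (seg.sum() + ref.sum())

# Toy spherical phantom on a 64^3 grid (stand-in for a material-realistic one).
n = 64
z, y, x = np.ogrid[:n, :n, :n]
phantom = (x - n/2)**2 + (y - n/2)**2 + (z - n/2)**2 < (n/4)**2

# Mock "reconstruction": the phantom corrupted by noise and re-thresholded,
# standing in for the tilt-series -> TVM reconstruction -> segmentation chain.
rng = np.random.default_rng(0)
recon = (phantom.astype(float) + rng.normal(0, 0.3, phantom.shape)) > 0.5

vol_err = abs(recon.sum() - phantom.sum()) / phantom.sum()
print(f"Dice = {dice(recon, phantom):.3f}, relative volume error = {vol_err:.1%}")
```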

Keywords: electron tomography, supported catalysts, nanometrology, error assessment

Procedia PDF Downloads 80