Search results for: spatial audio processing
5578 Highly Efficient Biohydrogen Production from Cassava Starch Processing Wastewater by Two-Stage Thermophilic Fermentation and Electrohydrogenesis
Authors: Peerawat Khongkliang, Prawit Kongjan, Tsuyoshi Imai, Poonsuk Prasertsan, Sompong O-Thong
Abstract:
A two-stage thermophilic fermentation and electrohydrogenesis process was used to convert cassava starch processing wastewater into hydrogen gas. The maximum hydrogen yield from the fermentation stage by Thermoanaerobacterium thermosaccharolyticum PSU-2 was 248 mL H2/g-COD at an optimal pH of 6.5. An optimum hydrogen production rate of 820 mL/L/d and a yield of 200 mL/g-COD were obtained at an HRT of 2 days in the fermentation stage. The cassava starch processing wastewater fermentation effluent consisted of acetic acid, butyric acid and propionic acid. The effluent from the fermentation stage was used as feedstock for hydrogen production by microbial electrolysis cells (MECs) at an applied voltage of 0.6 V in the second stage, in which an additional 657 mL H2/g-COD was produced. The energy efficiency based on the electricity needed for the MEC was 330%, with a COD removal of 95%. The overall hydrogen yield was 800-900 mL H2/g-COD. Microbial community analysis of electrohydrogenesis by DGGE shows that exoelectrogens belonging to Acidiphilium sp., Geobacter sulfurreducens and Thermincola sp. were dominant at the anode. These results show that the two-stage thermophilic fermentation and electrohydrogenesis process improved hydrogen production performance with high hydrogen yields, high gas production rates and high COD removal efficiency.
Keywords: cassava starch processing wastewater, biohydrogen, thermophilic fermentation, microbial electrolysis cell
Procedia PDF Downloads 343
5577 Multichannel Surface Electromyography Trajectories for Hand Movement Recognition Using Intrasubject and Intersubject Evaluations
Authors: Christina Adly, Meena Abdelmeseeh, Tamer Basha
Abstract:
This paper proposes a system for hand movement recognition using multichannel surface EMG (sEMG) signals obtained from 40 subjects performing 40 different exercises, which are available in the Ninapro (Non-Invasive Adaptive Prosthetics) database. First, we applied processing methods to the raw sEMG signals to convert them to their amplitudes. Second, we used deep learning methods to solve our problem by passing the preprocessed signals to fully connected neural networks (FCNN) and recurrent neural networks (RNN) with Long Short-Term Memory (LSTM). Using intrasubject evaluation, the accuracy of the FCNN is 72%, with a training time of around 76 minutes, while the accuracy of the RNN is 79.9%, with a processing time of 8 minutes and 22 seconds. Third, we applied postprocessing methods, such as majority voting (MV) and the Movement Error Rate (MER), to improve the accuracy. The accuracy after applying MV is 75% and 86% for the FCNN and RNN, respectively. The MER value has an inverse relationship with the prediction delay as the window length for measuring the MV is varied. The final part of the study uses the RNN with intersubject evaluation. The experimental results showed that to obtain good testing accuracy with reasonable processing time, around 20 subjects should be used.
Keywords: hand movement recognition, recurrent neural network, movement error rate, intrasubject evaluation, intersubject evaluation
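The majority-voting postprocessing described in the abstract above can be sketched in a few lines; the window length and label stream below are illustrative assumptions, not values taken from the paper.

```python
from collections import Counter

def majority_vote(predictions, window=5):
    """Smooth a sequence of per-frame class predictions by replacing each
    prediction with the most common label in a sliding window centered on it."""
    smoothed = []
    half = window // 2
    for i in range(len(predictions)):
        lo = max(0, i - half)
        hi = min(len(predictions), i + half + 1)
        smoothed.append(Counter(predictions[lo:hi]).most_common(1)[0][0])
    return smoothed

# A noisy prediction stream: isolated misclassifications are voted away.
raw = [1, 1, 3, 1, 1, 2, 2, 2, 1, 2, 2]
print(majority_vote(raw))  # -> [1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2]
```

A longer window removes more isolated errors but delays the detected movement transition, which is the accuracy/prediction-delay trade-off the MER quantifies.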
Procedia PDF Downloads 142
5576 Influence of Urban Microclimates on Human Perceptions and Behavioral Patterns: A Relational Context of Human Parameters in Urban Design
Authors: Naveed Mazhar
Abstract:
Our cities are known to have significant modifying effects on the local climate. The nature of the modifications depends on a range of physical variables, usually assessed at a wide range of spatial scales. Physical spatial dimensions, such as measured parameters of microclimates, significantly influence human sensations and are known to have far-reaching effects on human thermal comfort and, by corollary, on human perception. Less scholarship has thrown light on the subjective dimension, and the literature insufficiently demonstrates a relational approach between human behavior and how it is affected by urban microclimates. Beyond identifying gaps in the most recent scholarship and providing future research opportunities, this study will help improve urban design guidelines and raise framework standards of socially responsive urban design. It will help equip future professionals to ameliorate the effects of urban microclimates on participants' perceptions, enabling more frequent usage of outdoor urban spaces. The physical parameters of an outdoor open space shape psychological human adaptations and are a measure of the degree to which people are willing to adapt to their surroundings. A large amount of research is available on urban microclimates; however, very few studies focus on elucidating the critical factors influencing human perceptions of microclimates in urban spatial configurations. Based on the most recent scholarship, this study evaluates the role urban microclimatic conditions play in the formation of human perceptions and, by extension, of behavioral patterns in outdoor open spaces. Furthermore, against the backdrop of the current scholarly literature, this study defines the socio-spatial interdependence of behavioral patterns with the built urban fabric and its resultant correlation with human perception. A comprehensive review and analysis of the recent research conducted within the scope of the study will help frame gaps, issues, current research methods and future research opportunities.
Keywords: urban design, urban microclimate, human perception, human behavioral patterns
Procedia PDF Downloads 304
5575 Comparative Analysis of Two Approaches to Joint Signal Detection, ToA and AoA Estimation in Multi-Element Antenna Arrays
Authors: Olesya Bolkhovskaya, Alexey Davydov, Alexander Maltsev
Abstract:
In this paper, two approaches to joint signal detection, time of arrival (ToA) and angle of arrival (AoA) estimation in a multi-element antenna array are investigated. Two scenarios were considered: the first, when the waveform of the useful signal is known a priori, and the second, when the waveform of the desired signal is unknown. For the first scenario, antenna array signal processing based on multi-element matched filtering (MF), followed by a non-coherent detection scheme and maximum likelihood (ML) parameter estimation blocks, is exploited. For the second scenario, signal processing based on estimating the covariance matrix of the antenna array elements, followed by eigenvector analysis and ML parameter estimation blocks, is applied. The performance characteristics of both signal processing schemes are thoroughly investigated and compared for different useful signal and noise parameters.
Keywords: antenna array, signal detection, ToA, AoA estimation
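The matched-filtering idea behind the first scenario can be sketched for ToA estimation on a single channel; the discrete template and the echo position below are invented for illustration and are not from the paper.

```python
def matched_filter_toa(received, template):
    """Estimate the time of arrival as the lag that maximizes the
    cross-correlation between the received signal and the known template."""
    best_lag, best_score = 0, float("-inf")
    for lag in range(len(received) - len(template) + 1):
        score = sum(received[lag + i] * template[i] for i in range(len(template)))
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag

template = [1.0, -1.0, 1.0, 1.0]
# The known waveform buried at sample 6 of an otherwise quiet record.
received = [0.0] * 6 + template + [0.0] * 4
print(matched_filter_toa(received, template))  # -> 6
```

In the array setting, one such filter runs per element, and the per-element correlation peaks feed the non-coherent detection and ML estimation blocks.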
Procedia PDF Downloads 497
5574 Lab Bench for Synthetic Aperture Radar Imaging System
Authors: Karthiyayini Nagarajan, P. V. Ramakrishna
Abstract:
Radar imaging techniques provide extensive applications in the field of remote sensing, notably Synthetic Aperture Radar (SAR), which provides high-resolution target images. This paper puts forward an effective and realizable signal generation and processing scheme for SAR images. The major units in the system include a camera, a signal generation unit, a signal processing unit and a display screen. The real radio channel is replaced by its mathematical model, based on an optical image, to calculate a reflected signal model in real time. The signal generation unit realizes the algorithm and forms the radar reflection model. The signal processing unit provides range and azimuth resolution through matched filtering and a spectrum analysis procedure to form the radar image on the display screen. The restored image has the same quality as that of the optical image. This SAR imaging system has been designed and implemented using MATLAB and Quartus II tools on a Stratix III device as a system (lab bench) that works in real time to study and investigate radar imaging rudiments and signal processing schemes for educational and research purposes.
Keywords: synthetic aperture radar, radio reflection model, lab bench, imaging engineering
Procedia PDF Downloads 498
5573 Effect of Two Different Methods for Juice Processing on the Anthocyanins and Polyphenolics of Blueberry (Vaccinium corymbosum)
Authors: Onur Ercan, Buket Askin, Erdogan Kucukoner
Abstract:
Blueberry (Vaccinium corymbosum, bluegold) has become a popular beverage due to its nutritional value, including vitamins, minerals, and antioxidants. In this study, the effects of pressing, mashing, enzymatic treatment, and pasteurization on the anthocyanins, colour, and polyphenolics of blueberry juice (BJ) were studied. The BJ was produced with two different methods: direct juice extraction (DJE) and a mash treatment process (MTP). After crude blueberry juice (CBJ) production, the samples were first treated with commercial enzymes [Novoferm-61 (Novozymes A/S), 2-10 mL/L] to break down the hydrocolloid polysaccharides, mainly pectin and starch. The enzymes were added at various concentrations. The highest transmittance (66.53%) was obtained for Novoferm-61 at a concentration of 2 mL/L. After enzymatic treatment, clarification trials were applied to the enzymatically treated BJs by adding various amounts of bentonite (10%, w/v), gelatin (1%, w/v) and kieselsol (15%, v/v). The turbidities of the clarified samples were then determined. However, there were no significant differences between the transmittances (%) of the samples. Therefore, only enzymatic treatment was applied in blueberry juice processing (DDBJ, depectinized direct blueberry juice). Based on an initial pressing process carried out to evaluate press function, it was determined that pressing fresh blueberries with no other processing did not render adequate juice due to lack of liquefaction. Therefore, the blueberries were mashed into small pieces (3 mm), and then enzymatic treatments and clarification trials were performed. Finally, both BJ samples were pasteurized. Compositional analyses, colour properties, polyphenols and antioxidant properties were compared. Enzymatic treatment caused a significant reduction in anthocyanin (ACN) content (30%) in direct blueberry juice processing (DBJ), while there was a significant increase in the mash treatment process (MTP). Overall anthocyanin levels were higher in treated samples after each processing step for MTP samples, while polyphenolic levels were slightly higher for both processes (DBJ and MTP). There was a reduction in ACNs and polyphenolics only after pasteurization. These results indicate that both methods are suitable for obtaining fresh blueberry juice. However, since the anthocyanin content, phenolic content, antioxidant activity, and juice yield were all higher with the MTP method than with the DBJ method, the MTP method should be preferred for processing blueberries into juice.
Keywords: anthocyanins, blueberry, depectinization, polyphenols
Procedia PDF Downloads 94
5572 Quantitative Comparisons of Different Approaches for Rotor Identification
Authors: Elizabeth M. Annoni, Elena G. Tolkacheva
Abstract:
Atrial fibrillation (AF) is the most common sustained cardiac arrhythmia that is a known prognostic marker for stroke, heart failure and death. Reentrant mechanisms of rotor formation, which are stable electrical sources of cardiac excitation, are believed to cause AF. No existing commercial mapping systems have been demonstrated to consistently and accurately predict rotor locations outside of the pulmonary veins in patients with persistent AF. There is a clear need for robust spatio-temporal techniques that can consistently identify rotors using unique characteristics of the electrical recordings at the pivot point that can be applied to clinical intracardiac mapping. Recently, we have developed four new signal analysis approaches – Shannon entropy (SE), Kurtosis (Kt), multi-scale frequency (MSF), and multi-scale entropy (MSE) – to identify the pivot points of rotors. These proposed techniques utilize different cardiac signal characteristics (other than local activation) to uncover the intrinsic complexity of the electrical activity in the rotors, which are not taken into account in current mapping methods. We validated these techniques using high-resolution optical mapping experiments in which direct visualization and identification of rotors in ex-vivo Langendorff-perfused hearts were possible. Episodes of ventricular tachycardia (VT) were induced using burst pacing, and two examples of rotors were used showing 3-sec episodes of a single stationary rotor and figure-8 reentry with one rotor being stationary and one meandering. Movies were captured at a rate of 600 frames per second for 3 sec. with 64x64 pixel resolution. These optical mapping movies were used to evaluate the performance and robustness of SE, Kt, MSF and MSE techniques with respect to the following clinical limitations: different time of recordings, different spatial resolution, and the presence of meandering rotors. 
To quantitatively compare the results, the SE, Kt, MSF and MSE techniques were compared to the "true" rotor(s) identified using the phase map. Accuracy was calculated for each approach as the duration of the time series and the spatial resolution were reduced. The time series duration was decreased from its original length of 3 sec down to 2, 1, and 0.5 sec. The spatial resolution of the original VT episodes was decreased from 64x64 pixels to 32x32, 16x16, and 8x8 pixels by uniformly removing pixels from the optical mapping video. Our results demonstrate that Kt, MSF and MSE were able to accurately identify the pivot point of the rotor under all three clinical limitations. The MSE approach demonstrated the best overall performance, but Kt was the best at identifying the pivot point of the meandering rotor. Artifacts mildly affect the performance of the Kt, MSF and MSE techniques, but had a strong negative impact on the performance of SE. The results of our study motivate further validation of the SE, Kt, MSF and MSE techniques using intra-atrial electrograms from paroxysmal and persistent AF patients to see if these approaches can identify pivot points in a clinical setting. More accurate rotor localization could significantly increase the efficacy of catheter ablation to treat AF, resulting in a higher success rate for single procedures.
Keywords: atrial fibrillation, optical mapping, signal processing, rotors
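As a rough illustration of the entropy-based idea above, a Shannon-entropy value can be computed per pixel from a histogram of its signal amplitudes; the binning scheme and toy signals below are assumptions for illustration, not the authors' implementation.

```python
import math

def shannon_entropy(signal, bins=8):
    """Shannon entropy (in bits) of a 1-D signal, estimated from a
    histogram of its amplitudes."""
    lo, hi = min(signal), max(signal)
    width = (hi - lo) / bins or 1.0
    counts = [0] * bins
    for x in signal:
        idx = min(int((x - lo) / width), bins - 1)
        counts[idx] += 1
    n = len(signal)
    return -sum((c / n) * math.log2(c / n) for c in counts if c)

# A signal that visits only two amplitude levels has lower entropy than
# one spread across many levels -- the intuition behind distinguishing
# the complex activity at a pivot point from regular activation.
flat = [(-1.0) ** t for t in range(300)]
wave = [math.sin(0.2 * t) for t in range(300)]
print(shannon_entropy(flat) < shannon_entropy(wave))  # -> True
```

Applied to every pixel of the optical mapping movie, such a per-pixel statistic yields a spatial map in which the pivot region can stand out from the surrounding tissue.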
Procedia PDF Downloads 324
5571 Design and Implementation of a Lab Bench for Synthetic Aperture Radar Imaging System
Authors: Karthiyayini Nagarajan, P. V. RamaKrishna
Abstract:
Radar imaging techniques provide extensive applications in the field of remote sensing, notably Synthetic Aperture Radar (SAR), which provides high-resolution target images. This paper puts forward an effective and realizable signal generation and processing scheme for SAR images. The major units in the system include a camera, a signal generation unit, a signal processing unit and a display screen. The real radio channel is replaced by its mathematical model, based on an optical image, to calculate a reflected signal model in real time. The signal generation unit realizes the algorithm and forms the radar reflection model. The signal processing unit provides range and azimuth resolution through matched filtering and a spectrum analysis procedure to form the radar image on the display screen. The restored image has the same quality as that of the optical image. This SAR imaging system has been designed and implemented using MATLAB and Quartus II tools on a Stratix III device as a system (lab bench) that works in real time to study and investigate radar imaging rudiments and signal processing schemes for educational and research purposes.
Keywords: synthetic aperture radar, radio reflection model, lab bench
Procedia PDF Downloads 468
5570 Spatial Rank-Based High-Dimensional Monitoring through Random Projection
Authors: Chen Zhang, Nan Chen
Abstract:
High-dimensional process monitoring is becoming increasingly important in many application domains, where the process distribution is usually unknown and much more complicated than the normal distribution, and the between-stream correlation cannot be neglected. However, since the process dimension is generally much bigger than the reference sample size, most traditional nonparametric multivariate control charts fail in high-dimensional cases due to the curse of dimensionality. Furthermore, when the process goes out of control, the influenced variables are quite sparse compared with the whole dimension, which increases the detection difficulty. To address these issues, this paper proposes a new nonparametric monitoring scheme for high-dimensional processes. The scheme first projects the high-dimensional process into several subprocesses using random projections for dimension reduction. Then, for every subprocess, whose dimension is much smaller than the reference sample size, a local nonparametric control chart is constructed based on the spatial rank test to detect changes in that subprocess. Finally, the results of all the local charts are fused together for a decision. Furthermore, after an out-of-control (OC) alarm is triggered, a diagnostic framework based on the square-root LASSO is proposed. Numerical studies demonstrate that the chart has satisfactory detection power for sparse OC changes and robust performance for non-normally distributed data. The diagnostic framework is also effective in identifying the truly changed variables. Finally, a real-data example is presented to demonstrate the application of the proposed method.
Keywords: random projection, high-dimensional process control, spatial rank, sequential change detection
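The dimension-reduction step described above can be sketched minimally, assuming Gaussian random projection matrices; the dimensions and seed below are illustrative, and the sketch covers only the projection, not the spatial rank chart built on top of it.

```python
import random

def random_projection(x, out_dim, seed=0):
    """Project a high-dimensional observation x into out_dim dimensions
    using a Gaussian random matrix. Fixing the seed fixes the matrix, so
    every observation of the stream is projected consistently."""
    rng = random.Random(seed)
    d = len(x)
    # One row of N(0, 1/out_dim) entries per output coordinate.
    proj = [[rng.gauss(0.0, 1.0 / out_dim ** 0.5) for _ in range(d)]
            for _ in range(out_dim)]
    return [sum(row[j] * x[j] for j in range(d)) for row in proj]

x = [1.0] * 500                        # a 500-dimensional observation
low = random_projection(x, out_dim=5)  # a 5-dimensional subprocess sample
print(len(low))  # -> 5
```

Several independent seeds give several subprocesses; each is low-dimensional enough for a nonparametric chart, and a sparse shift in the original space is likely to survive in at least one projection.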
Procedia PDF Downloads 299
5569 A Two-Stage Bayesian Variable Selection Method with the Extension of Lasso for Geo-Referenced Data
Authors: Georgiana Onicescu, Yuqian Shen
Abstract:
Due to the complex nature of geo-referenced data, multicollinearity of the risk factors in public health spatial studies is a commonly encountered issue, which leads to low parameter estimation accuracy because it inflates the variance in the regression analysis. To address this issue, we proposed a two-stage variable selection method by extending the least absolute shrinkage and selection operator (Lasso) to the Bayesian spatial setting, investigating the impact of risk factors on health outcomes. Specifically, in stage I, we performed variable selection using Bayesian Lasso and several other variable selection approaches. Then, in stage II, we performed model selection with only the variables selected in stage I and compared the methods again. To evaluate the performance of the two-stage variable selection methods, we conducted a simulation study with different distributions for the risk factors, using geo-referenced count data as the outcome and Michigan as the research region. We considered the cases when all candidate risk factors are independently normally distributed or follow a multivariate normal distribution with different correlation levels. Two other Bayesian variable selection methods, the binary indicator and the combination of the binary indicator and Lasso, were considered and compared as alternatives. The simulation results indicated that the proposed two-stage Bayesian Lasso variable selection method has the best performance for both the independent and dependent cases considered. When compared with the one-stage approach and the other two alternative methods, the two-stage Bayesian Lasso approach provides the highest estimation accuracy in all scenarios considered.
Keywords: Lasso, Bayesian analysis, spatial analysis, variable selection
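The shrinkage at the heart of the Lasso reduces, for a single standardized predictor, to soft-thresholding the least-squares coefficient; the numbers below are illustrative, and this toy sketch shows only the classical Lasso operator, not the authors' Bayesian spatial implementation.

```python
def soft_threshold(z, lam):
    """Lasso solution for one coefficient under an orthonormal design:
    shrink the least-squares estimate z toward zero by lam, and set it
    exactly to zero when |z| <= lam -- this zeroing is what performs
    variable selection."""
    if z > lam:
        return z - lam
    if z < -lam:
        return z + lam
    return 0.0

# A strong effect survives shrinkage; a weak one is selected out.
print(soft_threshold(2.5, 0.4))   # -> 2.1
print(soft_threshold(-0.3, 0.4))  # -> 0.0
```

Stage I keeps only predictors whose coefficients survive this kind of shrinkage; stage II then refits and compares models using just the survivors.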
Procedia PDF Downloads 144
5568 Volunteered Geographic Information Coupled with Wildfire Fire Progression Maps: A Spatial and Temporal Tool for Incident Storytelling
Authors: Cassandra Hansen, Paul Doherty, Chris Ferner, German Whitley, Holly Torpey
Abstract:
Wildfire is a natural and inevitable occurrence, yet changing climatic conditions have increased the severity, frequency, and risk to human populations in the wildland/urban interface (WUI) of the Western United States. Rapid dissemination of accurate wildfire information is critical to both the Incident Management Team (IMT) and the affected community. With the advent of increasingly sophisticated information systems, GIS can now be used as a web platform for sharing geographic information in new and innovative ways, such as virtual story map applications. Crowdsourced information can be extraordinarily useful when coupled with authoritative information. Information abounds in the form of social media, emergency alerts, radio, and news outlets, yet many of these resources lack a spatial component when first distributed. In this study, we describe how twenty-eight volunteer GIS professionals across nine Geographic Area Coordination Centers (GACC) sourced, curated, and distributed Volunteered Geographic Information (VGI) from authoritative social media accounts focused on disseminating information about wildfires and public safety. The combination of fire progression maps with VGI incident information helps answer three critical questions about an incident: where the fire started, how and why the fire behaved in an extreme manner, and how we can learn from the fire incident's story to respond to and prepare for future fires in the area. By adding a spatial component to that shared information, this team has been able to visualize shared information about wildfire starts in an interactive map that answers these three critical questions in a more intuitive way. Additionally, long-term social and technical impacts on communities are examined in relation to situational awareness of the disaster through map layers and agency links, the number of views in a particular region of a disaster, and community involvement in and sharing of this critical resource. Combined with a GIS platform and disaster VGI applications, this workflow and information become invaluable to communities within the WUI and bring spatial awareness for disaster preparedness, response, mitigation, and recovery. This study highlights progression maps as the ultimate storytelling mechanism through incident case studies and demonstrates that the impact of VGI and sophisticated applied cartographic methodology make this an indispensable resource for authoritative information sharing.
Keywords: storytelling, wildfire progression maps, volunteered geographic information, spatial and temporal
Procedia PDF Downloads 176
5567 Combining ASTER Thermal Data and Spatial-Based Insolation Model for Identification of Geothermal Active Areas
Authors: Khalid Hussein, Waleed Abdalati, Pakorn Petchprayoon, Khaula Alkaabi
Abstract:
In this study, we integrated ASTER thermal data with an area-based spatial insolation model to identify and delineate geothermally active areas in Yellowstone National Park (YNP). Two pairs of L1B ASTER day- and nighttime scenes were used to calculate land surface temperature. We employed the Emissivity Normalization Algorithm, which separates temperature from emissivity, to calculate surface temperature. We calculated the incoming solar radiation for the area covered by each of the four ASTER scenes using an insolation model and used this information to compute the temperature due to solar radiation. We then identified the statistical thermal anomalies using the land surface temperature and the residuals calculated from the modeled temperatures and the ASTER-derived surface temperatures. Areas with temperatures or temperature residuals greater than 2σ, as well as those between 1σ and 2σ, were considered ASTER-modeled thermal anomalies. The areas identified as thermal anomalies were in strong agreement with the thermal areas obtained from the YNP GIS database. Also, the YNP hot springs and geysers were located within areas identified as anomalous thermal areas. The consistency between our results and known geothermally active areas indicates that thermal remote sensing data, integrated with a spatial-based insolation model, provide an effective means for identifying and locating areas of geothermal activity over large areas and rough terrain.
Keywords: thermal remote sensing, insolation model, land surface temperature, geothermal anomalies
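The residual-thresholding step described above can be sketched as flagging pixels against 1σ and 2σ cutoffs; the residual values below are invented for illustration, and the labels are an assumption about how the two anomaly classes might be named.

```python
import statistics

def classify_anomalies(residuals):
    """Label each temperature residual as 'strong' (more than 2 sigma
    above the mean), 'weak' (between 1 and 2 sigma), or 'none'."""
    mu = statistics.mean(residuals)
    sigma = statistics.stdev(residuals)
    labels = []
    for r in residuals:
        z = (r - mu) / sigma
        if z > 2:
            labels.append("strong")
        elif z > 1:
            labels.append("weak")
        else:
            labels.append("none")
    return labels

# Seven background pixels and one geothermally heated pixel.
residuals = [0.1, -0.2, 0.0, 0.3, -0.1, 0.2, -0.3, 5.0]
print(classify_anomalies(residuals)[-1])  # -> strong
```

In practice the residual is surface temperature minus the insolation-modeled temperature, so solar heating is removed before the statistical thresholding isolates the geothermal signal.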
Procedia PDF Downloads 371
5566 Spatial Variation of Nitrogen, Phosphorus and Potassium Contents of Tomato (Solanum lycopersicum L.) Plants Grown in Greenhouses (Springs) in Elmali-Antalya Region
Authors: Namik Kemal Sonmez, Sahriye Sonmez, Hasan Rasit Turkkan, Hatice Tuba Selcuk
Abstract:
In this study, the spatial variation of the plant and soil nutrient contents of tomato plants grown in greenhouses was investigated in the Elmali region of Antalya. For this purpose, a total of 19 sampling points were determined. The coordinates of each sampling point were recorded using a hand-held GPS device and were transferred to satellite data in GIS. Soil samples were collected from two different depths, 0-20 and 20-40 cm, and leaf samples were taken from different tomato greenhouses. The soil and plant samples were analyzed for N, P and K. Then, attribute tables were created from the analysis results using GIS. The data were analyzed, and the semivariogram models and parameters (nugget, sill and range) of the variables were determined using GIS software. Kriged maps of the variables were created using the nugget, sill and range values with the geostatistical extension of ArcGIS software. The kriged maps of the N, P and K contents of the plant and soil samples showed patchy or relatively smooth distributions in the study areas. As a result, the N content of the plants was sufficient in approximately 66% of the tomato production areas. It was determined that the P and K contents were sufficient in 70% and 80% of the areas, respectively. On the other hand, the soil total K contents were generally adequate, and the available N and P contents were found to be good at both depths (0-20 and 20-40 cm) in 90% of the areas.
Keywords: Elmali, nutrients, springs greenhouses, spatial variation, tomato
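The semivariogram modeling described above starts from the empirical semivariance at each lag distance; a minimal 1-D sketch follows, with invented sample locations and nutrient values and a simple distance-tolerance binning assumed for illustration.

```python
def empirical_semivariogram(points, values, lag, tol=0.5):
    """Empirical semivariance for one lag distance:
    gamma(h) = mean of (z_i - z_j)^2 / 2 over all sample pairs whose
    separation is within tol of the lag."""
    diffs = []
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            h = abs(points[i] - points[j])
            if abs(h - lag) <= tol:
                diffs.append((values[i] - values[j]) ** 2 / 2.0)
    return sum(diffs) / len(diffs) if diffs else None

# N contents sampled along a 1-D transect (illustrative numbers).
xs = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
ns = [2.1, 2.3, 2.0, 2.8, 3.1, 3.0]
near = empirical_semivariogram(xs, ns, lag=1.0, tol=0.1)
far = empirical_semivariogram(xs, ns, lag=4.0, tol=0.1)
print(near < far)  # nearby samples are more similar than distant ones
```

Fitting a model (with nugget, sill and range parameters) to these empirical values over many lags is what supplies the weights for kriging the maps.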
Procedia PDF Downloads 243
5565 The BNCT Project Using the Cf-252 Source: Monte Carlo Simulations
Authors: Marta Błażkiewicz-Mazurek, Adam Konefał
Abstract:
The project can be divided into three main parts: i. modeling the Cf-252 neutron source and conducting an experiment to verify the correctness of the obtained results, ii. designing the BNCT system infrastructure, and iii. analyzing the results from the logical detector. Modeling of the Cf-252 source included designing the shape and size of the source as well as the energy and spatial distribution of the emitted neutrons. Two options were considered: a point source and a cylindrical spatial source. The energy distribution corresponded to various spectra taken from the specialized literature. Directionally isotropic neutron emission was simulated. The simulation results were compared with experimental values determined by the activation detector method, using indium foils and cadmium shields. The relative fluence rates of thermal and resonance neutrons were compared at chosen places in the vicinity of the source. The second part of the project, related to the modeling of the BNCT infrastructure, consisted of developing a simulation program taking into account all the essential components of the system. Materials with neutron-moderating, absorbing, and backscattering properties were adopted into the project. Additionally, a gamma radiation filter was introduced into the beam output system. The analysis of the simulation results obtained using a logical detector located at the beam exit of the BNCT infrastructure included the neutron energies and their spatial distribution. Optimization of the system involved changing its size and materials to obtain a suitably collimated beam of thermal neutrons.
Keywords: BNCT, Monte Carlo, neutrons, simulation, modeling
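Sampling the directionally isotropic emission mentioned above is a standard Monte Carlo step; the sketch below uses the textbook recipe (uniform cos θ and uniform φ) and is not necessarily the authors' code.

```python
import math
import random

def isotropic_direction(rng):
    """Draw a unit vector uniformly distributed on the sphere:
    cos(theta) uniform in [-1, 1], phi uniform in [0, 2*pi)."""
    cos_t = rng.uniform(-1.0, 1.0)
    sin_t = math.sqrt(1.0 - cos_t * cos_t)
    phi = rng.uniform(0.0, 2.0 * math.pi)
    return (sin_t * math.cos(phi), sin_t * math.sin(phi), cos_t)

rng = random.Random(42)
dirs = [isotropic_direction(rng) for _ in range(10000)]
# Every sample lies on the unit sphere, and the mean direction is near zero.
mean_z = sum(d[2] for d in dirs) / len(dirs)
print(abs(mean_z) < 0.05)  # -> True
```

Drawing cos θ (rather than θ) uniformly is what makes the emission isotropic; sampling θ uniformly would over-populate the poles.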
Procedia PDF Downloads 31
5564 Improving Cleanability by Changing Fish Processing Equipment Design
Authors: Lars A. L. Giske, Ola J. Mork, Emil Bjoerlykhaug
Abstract:
The design of fish processing equipment greatly impacts how easy the cleaning process for the equipment is. This is a critical issue in fish processing, as cleaning of fish processing equipment is a task that is both costly and time consuming, in addition to being very important with regard to product quality. Even more, poorly cleaned equipment could in the worst case lead to a contaminated product from which consumers could get ill. This paper will elucidate how equipment design changes could improve the work for the cleaners and save money for the fish processing facilities by looking at a case of product design improvements. The design of fish processing equipment largely determines how easy it is to clean. "Design for cleaning" is the new hype in the industry, and equipment where ease of cleaning is prioritized gains a competitive advantage over equipment in which design for cleaning has not been prioritized. Design for cleaning is an important research area for equipment manufacturers. SeaSide AS is continuously improving the design of its products in order to gain a competitive advantage. The focus in this paper will be conveyors for internal logistics, and a product called the "electro stunner" will be studied with regard to design for cleaning. Often together with SeaSide's customers, ideas for new products or product improvements are sketched out, 3D-modelled, discussed, revised, built and delivered. Feedback from the customers is taken into consideration, and the product design is revised once again. This loop was repeated multiple times and led to new product designs. The new designs sometimes also caused the manufacturing processes to change (as in going from bolted to welded connections). Customers report back that the concrete changes applied to products by SeaSide have resulted in overall more easily cleaned equipment.
These changes include, but are not limited to: welded connections (as opposed to bolted connections), gaps between contact faces, opening up structures to allow cleaning "inside" equipment, and generally avoiding areas in which humidity and water may gather and build up. This is important, as there will always be bacteria in the water, which will grow if the area never dries up. The work of creating more cleanable designs is still ongoing and will "never" be finished, as new designs and new equipment will have their own challenges.
Keywords: cleaning, design, equipment, fish processing, innovation
Procedia PDF Downloads 237
5563 Cricket Shot Recognition Using Conditional Directed Spatial-Temporal Graph Networks
Authors: Tanu Aneja, Harsha Malaviya
Abstract:
Capturing pose information in cricket shots poses several challenges, such as low-resolution videos, noisy data, and joint occlusions caused by the nature of the shots. In response to these challenges, we propose a CondDGConv-based framework specifically for cricket shot prediction. By analyzing the spatial-temporal relationships in batsman shot sequences from an annotated 2D cricket dataset, our model achieves 97% accuracy in predicting shot types. This performance is made possible by conditioning the graph network on batsman 2D poses, allowing for precise prediction of shot outcomes based on pose dynamics. Our approach highlights the potential for enhancing shot prediction in cricket analytics, offering a robust solution for overcoming pose-related challenges in sports analysis.
Keywords: action recognition, cricket, sports video analytics, computer vision, graph convolutional networks
Procedia PDF Downloads 18
5562 Effect of Local Processing Techniques on the Nutrients and Anti-Nutrients Content of Bitter Cassava (Manihot esculenta Crantz)
Authors: J. S. Alakali, A. R. Ismaila, T. G. Atume
Abstract:
The effects of local processing techniques on the nutrient and anti-nutrient content of bitter cassava were investigated. Raw bitter cassava tubers were boiled, sun dried, roasted, fried to produce Kuese, partially fermented and sun dried to produce Alubo, fermented by submersion to produce Akpu, and fermented in the solid state to produce yellow and white gari. These locally processed cassava products were subjected to proximate, mineral and anti-nutrient analyses using standard methods. The proximate analysis showed that raw bitter cassava is composed of 1.85% ash, 20.38% moisture, 4.11% crude fibre, 1.03% crude protein, 0.66% lipids and 71.88% total carbohydrate. In the mineral analysis, the raw bitter cassava tuber contained 32.00% calcium, 12.55% magnesium, 1.38% iron and 80.17% phosphorus. Even though all processing techniques significantly increased the mineral content, fermentation produced the greatest increase. The anti-nutrient analysis showed that the raw tuber contained 98.16 mg/100g cyanide, 44.00 mg/100g oxalate, 304.20 mg/100g phytate and 73.00 mg/100g saponin. In general, all the processing techniques significantly reduced the phytate, oxalate and saponin content of the cassava. However, only fermentation, sun drying and garification reduced the cyanide content of bitter cassava below the safe level (10 mg/100g) recommended by the Standards Organisation of Nigeria. Yellow gari (with the addition of palm oil) showed a lower cyanide content (1.10 mg/100g) than white gari (3.51 mg/100g). Processing methods involving fermentation reduce cyanide and other anti-nutrients in the cassava to levels that are safe for consumption and should be widely practiced.
Keywords: bitter cassava, local processing, fermentation, anti-nutrients
Procedia PDF Downloads 304
5561 Multi-Scale Damage Modelling for Microstructure Dependent Short Fiber Reinforced Composite Structure Design
Authors: Joseph Fitoussi, Mohammadali Shirinbayan, Abbas Tcharkhtchi
Abstract:
Due to material flow during processing, short fiber reinforced composite (SFRC) structures obtained by injection or compression molding generally present strong spatial microstructure variation. On the other hand, the quasi-static, dynamic, and fatigue behavior of these materials is highly dependent on microstructure parameters such as the fiber orientation distribution. Indeed, because of complex damage mechanisms, SFRC structure design is a key challenge for safety and reliability. In this paper, we propose a micromechanical model allowing prediction of the damage behavior of real structures as a function of the microstructure's spatial distribution. To this aim, a statistical damage criterion including strain rate and fatigue effects at the local scale is introduced into a Mori-Tanaka model. A critical local damage state is identified, allowing fatigue life prediction. Moreover, the multi-scale model is coupled with an experimentally established intrinsic link between damage under monotonic loading and fatigue life in order to build an abacus giving Tsai-Wu failure criterion parameters as a function of microstructure and targeted fatigue life. On the other hand, the micromechanical damage model gives access to the evolution of the anisotropic stiffness tensor of an SFRC submitted to complex thermomechanical loading, including quasi-static, dynamic, and cyclic loading with temperature and amplitude variations. The latter is then used to fill out microstructure-dependent material cards in finite element analysis for design optimization in the case of complex loading histories. The proposed methodology is illustrated in the case of a real automotive component made of sheet molding compound (the PSA 3008 tailgate).
The obtained results emphasize how the proposed micromechanical methodology opens a new path for the automotive industry to lighten vehicle bodies and thereby save energy and reduce gas emissions.
Keywords: short fiber reinforced composite, structural design, damage, micromechanical modelling, fatigue, strain rate effect
Procedia PDF Downloads 107
5560 Measuring Urban Sprawl in the Western Cape Province, South Africa: An Urban Sprawl Index for Comparative Purposes
Authors: Anele Horn, Amanda Van Eeden
Abstract:
The emphasis on the challenges posed by continued urbanisation, especially in developing countries, has meant that urban sprawl is often researched and analysed in metropolitan areas, but rarely in small and medium towns. Consequently, there exists no comparative instrument for measuring the proportional extent of urban sprawl in metropolitan areas against that of small and medium towns. This research proposes an Urban Sprawl Index as a tool to comparatively analyse the extent of urban sprawl between cities and towns of different sizes. The index can also be used over the longer term by authorities developing spatial policy to track the success or failure of specific tools intended to curb urban sprawl. In South Africa, as elsewhere in the world, the last two decades witnessed a proliferation of legislation and spatial policies to limit urban sprawl and contain the physical expansion and development of urban areas, but measuring the successes or failures of these instruments has remained a largely unattainable goal, mainly as a result of the absence of an appropriate measure of proportionate comparison. As a result of the spatial political history of Apartheid, urban areas acquired a spatial form characterized by single-core cities with far-reaching and wide-spreading peripheral development, either in the form of affluent suburbs or as a result of post-Apartheid programmes such as the Reconstruction and Development Programme (1995) which, in an attempt to address the immediate housing shortage, favoured the establishment of single-dwelling residential units for low-income communities on single plots on affordable land at the urban periphery. This invariably contributed to urban sprawl, and even though the programme has since been abandoned, the trend towards low-density residential development continues.
The research area is the Western Cape Province of South Africa, which exhibits all the spatial challenges described above. In academia and the popular media, the City of Cape Town (the only metropolitan authority in the province) has received the lion's share of the focus in terms of critique of urban development and spatial planning; the smaller towns and cities in the Western Cape have arguably received much less public attention and were spared the naming and shaming of being unsustainable urban areas in terms of land consumption and physical expansion. The Urban Sprawl Index for the Western Cape (USIWC) put forward by this research enables local authorities in the Western Cape Province to measure the extent of urban sprawl proportionately and comparatively to other cities in the province, thereby providing a means of measuring the success of the spatial instruments employed to limit urban expansion and inefficient land consumption. In developing the USIWC, the research made use of satellite data and of population growth data extracted from the national census, both for base years 2001 and 2011.
Keywords: urban sprawl, index, Western Cape, South Africa
Procedia PDF Downloads 329
5559 Multi-Scale Spatial Difference Analysis Based on Nighttime Lighting Data
Authors: Qinke Sun, Liang Zhou
Abstract:
The ‘Dragon-Elephant Debate’ between China and India is an important manifestation of global multipolarity in the 21st century. The two rising powers carried out economic reforms one after another, more than ten years apart, becoming the fastest growing developing country and emerging economy in the world. At the same time, the development differences between China and India have gradually attracted wide attention from scholars. Based on continuous annual nighttime light data (DMSP-OLS) from 1992 to 2012, this paper systematically compares and analyses the regional development differences between China and India using the Gini coefficient, the coefficient of variation, the comprehensive night light index (CNLI) and hot spot analysis. The results show that: (1) China's overall expansion from 1992 to 2012 was 1.84 times that of India; China's lit area changed by a factor of 2.6 and India's by a factor of 2. The proportion of unlit area in China dropped from 92% to 82%, while that in India dropped from 71% to 50%. (2) China's new growth cities appear in Hohhot, Ordos (Inner Mongolia), and Urumqi in the west, while the declining cities are concentrated in Liaoning Province and Jilin Province in the northeast; India's new growth cities are concentrated in Chhattisgarh in the north, while the declining areas are distributed in Uttar Pradesh. (3) China's differences at different scales are lower than India's, and its regional inequality of development is gradually narrowing: Gini coefficients at the regional and provincial levels have decreased from 0.29 and 0.44 to 0.24 and 0.38, respectively. Regional inequality in India has improved only slowly and its regional differences are gradually widening, with the regional Gini coefficient rising from 0.28 to 0.32 and the provincial Gini coefficient decreasing slightly from 0.64 to 0.63.
(4) The spatial pattern of China's regional development is mainly an east-west difference, reflecting the difference between coastal and inland areas; the spatial pattern of India's regional development is mainly a north-south difference, but because the southern states face the sea, it also reflects a coastal-inland difference to a certain extent. (5) Beijing and Shanghai present a multi-core outward expansion model, with an average annual CNLI higher than 0.01, while New Delhi and Mumbai present a main-core enhancement expansion model, with an average annual CNLI lower than 0.01; the average annual CNLI of Shanghai is about five times that of Mumbai.
Keywords: spatial pattern, spatial difference, DMSP-OLS, China, India
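The Gini coefficients reported above can be reproduced from per-region light totals in a few lines. The sketch below is illustrative only (the regional values are invented, not the study's data) and uses the standard rank-based closed form of the coefficient:

```python
# Illustrative sketch: regional inequality via the Gini coefficient,
# as could be applied to per-region night-light totals (values invented).

def gini(values):
    """Gini coefficient via the rank-weighted closed form (0 = equal, ->1 = unequal)."""
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    # Sum of rank-weighted values, ranks 1..n over the sorted list.
    weighted = sum((i + 1) * x for i, x in enumerate(xs))
    return (2.0 * weighted) / (n * total) - (n + 1.0) / n

# Perfect equality gives 0; strong concentration approaches 1.
print(round(gini([10, 10, 10, 10]), 3))  # -> 0.0
print(round(gini([1, 1, 1, 97]), 3))
```

A value near 0.3, as reported for the regional level, would indicate moderate concentration of lit area across regions.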
Procedia PDF Downloads 155
5558 Optical Vortex in Asymmetric Arcs of Rotating Intensity
Authors: Mona Mihailescu, Rebeca Tudor, Irina A. Paun, Cristian Kusko, Eugen I. Scarlat, Mihai Kusko
Abstract:
Specific intensity distributions in laser beams are required in many fields: optical communications, material processing, microscopy, and optical tweezers. In optical communications, the information embedded in specific beams and the superposition of multiple beams can be used to increase the capacity of communication channels, employing spatial modulation as an additional degree of freedom besides the already available polarization and wavelength multiplexing. In this regard, optical vortices are of interest due to their potential to carry independent data streams which can be multiplexed at the transmitter and demultiplexed at the receiver. The literature has also studied their combinations: 1) axial or perpendicular superpositions of multiple optical vortices, or 2) combinations with other laser beam types such as Bessel or Airy beams. Optical vortices, characterized by a stationary ring-shaped intensity and a rotating phase, are achieved using computer generated holograms (CGHs) obtained by simulating the interference between a tilted plane wave and a wave passing through a helical phase object. Here, we propose a method to combine information through the reunion of two CGHs. One is obtained using a helical phase distribution, characterized by its topological charge, m. The other is obtained using a conical phase distribution, characterized by its radial factor, r0. Each CGH is obtained using a plane wave with a different tilt: km for the CGH generated from the helical phase object and kr for the one generated from the conical phase object. These reunions of two CGHs are calculated as phase optical elements, addressed on the liquid crystal display of a spatial light modulator, to optically process the incident beam for investigations of the diffracted intensity pattern in the far field. For a parallel reunion of the two CGHs and high values of the ratio between km and kr, the bright ring in the first diffraction order, specific to optical vortices, is changed into an asymmetric intensity pattern: a number of circle arcs.
The two diffraction orders (+1 and -1) are asymmetrical relative to each other. In different planes along the optical axis, this asymmetric intensity pattern is observed to rotate around its centre: anticlockwise in the +1 diffraction order and clockwise in the -1 diffraction order. The relation between m and r0 controls the diameter of the circle arcs, and the ratio between km and kr controls the number of arcs. For a perpendicular reunion of the two CGHs and low values of the ratio between km and kr, the optical vortices are multiplied and focused in different planes, depending on the radial parameter. The first diffraction order contains information about both phase objects. It is incident on the phase masks placed at the receiver, computed using the opposite values of the topological charge or the radial parameter and displayed successively. Overall, the proposed method is explored in terms of its constructive parameters, for the possibilities offered by combining different beam types in robust optical communications.
Keywords: asymmetrical diffraction orders, computer generated holograms, conical phase distribution, optical vortices, spatial light modulator
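As a rough illustration of the CGH principle described above (interference of a tilted plane wave with a helical phase object), the following sketch computes a binary "fork" hologram on a small grid; the grid size, topological charge m, and tilt kx are assumed values for illustration, not the authors' parameters:

```python
# Illustrative sketch: binary fork-grating CGH from the interference of a
# tilted plane wave with a helical phase of topological charge m.
# Grid size, charge, and tilt are assumed values.
import numpy as np

N, m, kx = 256, 3, 0.2                      # grid size, topological charge, tilt (rad/pixel)
y, x = np.mgrid[-N // 2:N // 2, -N // 2:N // 2]
phi = np.arctan2(y, x)                      # azimuthal angle of each pixel
# Interference pattern cos(kx * x - m * phi), binarized to a 0/1 grating;
# the m-pronged fork dislocation at the centre encodes the vortex charge.
hologram = (np.cos(kx * x - m * phi) > 0).astype(np.uint8)
print(hologram.shape)
```

Addressing such a pattern on a spatial light modulator would, in the +1 diffraction order, yield the ring-shaped vortex intensity the abstract refers to.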
Procedia PDF Downloads 311
5557 Statistical Investigation Projects: A Way for Pre-Service Mathematics Teachers to Actively Solve a Campus Problem
Authors: Muhammet Şahal, Oğuz Köklü
Abstract:
As statistical thinking and problem-solving processes have become increasingly important, teachers need to be more rigorously prepared with statistical knowledge to teach their students effectively. This study examined preservice mathematics teachers' development of statistical investigation projects using data and exploratory data analysis tools, following a design-based research perspective and statistical investigation cycle. A total of 26 pre-service senior mathematics teachers from a public university in Turkiye participated in the study. They formed groups of 3-4 members voluntarily and worked on their statistical investigation projects for six weeks. The data sources were audio recordings of pre-service teachers' group discussions while working on their projects in class, whole-class video recordings, and each group’s weekly and final reports. As part of the study, we reviewed weekly reports, provided timely feedback specific to each group, and revised the following week's class work based on the groups’ needs and development in their project. We used content analysis to analyze groups’ audio and classroom video recordings. The participants encountered several difficulties, which included formulating a meaningful statistical question in the early phase of the investigation, securing the most suitable data collection strategy, and deciding on the data analysis method appropriate for their statistical questions. The data collection and organization processes were challenging for some groups and revealed the importance of comprehensive planning. Overall, preservice senior mathematics teachers were able to work on a statistical project that contained the formulation of a statistical question, planning, data collection, analysis, and reaching a conclusion holistically, even though they faced challenges because of their lack of experience. 
The study suggests that pre-service senior mathematics teachers have the potential to apply statistical knowledge and techniques in a real-world context, and that they can carry such a project through with the support of researchers. We provide implications for the statistical education of teachers and for future research.
Keywords: design-based study, pre-service mathematics teachers, statistical investigation projects, statistical model
Procedia PDF Downloads 85
5556 Increasing a Computer Performance by Overclocking Central Processing Unit (CPU)
Authors: Witthaya Mekhum, Wutthikorn Malikong
Abstract:
The objective of this study is to investigate the increase in desktop computer performance after overclocking the central processing unit (CPU), i.e., running the component at a higher clock rate (more clock cycles per second) than it was designed for, in steps of 0.1 GHz (100 MHz) per level from 4.0 GHz to 4.5 GHz. The computer performance at each level is tested with 4 programs: Hyper PI ver. 0.99b, Cinebench R15, LinX ver. 0.6.4 and WinRAR. After the CPU overclock, the computer performance increased. When the CPU was overclocked by 29%, the performance tested by Hyper PI ver. 0.99b increased by 10.03%, the performance tested by Cinebench R15 increased by 20.05%, and the performance tested by LinX increased by 16.61%. However, the performance increased only 8.14% when tested with WinRAR. The computer performance did not increase in proportion to the overclock rate because the computer consists of many other components, such as Random Access Memory (RAM), the hard disk drive, the motherboard and the display card.
Keywords: overclock, performance, central processing unit, computer
Procedia PDF Downloads 283
5555 Development of a Vacuum System for Orthopedic Drilling Processes and Determination of Optimal Processing Parameters for Temperature Control
Authors: Kadir Gök
Abstract:
In this study, a vacuum system was developed for orthopedic drilling processes, and the most efficient processing parameters were determined through statistical analysis of temperature rise. A reverse engineering technique was used to obtain a 3D model of the chip vacuum system, and the point cloud data obtained were transferred to SolidWorks software in STL format. An experimental design was applied by selecting different parameters and their levels, such as RPM, feed rate, and drill bit diameter, and the most efficient processing parameters with respect to temperature rise were determined using ANOVA. Additionally, the bone chip vacuum device performed successfully in collecting all chips and fragments in the bone drilling experiments, and the chip-collecting device was found to be useful in removing excess heat from the drilling zone. The effects of the processing parameters on temperature levels during chip vacuuming were determined, and it was found that the bone chips and fragments can be used as autografts and allografts for tissue engineering. Overall, this study provides significant insights into the development of a vacuum system for orthopedic drilling processes and the use of bone chips and fragments in tissue engineering applications.
Keywords: vacuum system, orthopedic drilling, temperature rise, bone chips
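To illustrate the kind of ANOVA screening described above, here is a minimal one-way F-statistic computed by hand; the temperature-rise values and the grouping by spindle speed are invented for illustration and are not the study's measurements:

```python
# Illustrative sketch: one-way ANOVA F statistic in pure Python,
# applied to (invented) temperature-rise readings at three spindle speeds.

def one_way_anova(*groups):
    """Return the F statistic: between-group vs within-group variance ratio."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

f_stat = one_way_anova([37.2, 38.1, 37.8],   # temperature rise at low RPM (invented)
                       [39.5, 40.2, 39.9],   # medium RPM
                       [42.1, 43.0, 42.5])   # high RPM
print(f_stat)  # clearly separated group means give a large F
```

A large F against the appropriate critical value indicates that the factor (here, spindle speed) has a statistically significant effect on temperature rise.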
Procedia PDF Downloads 98
5554 Mathematical Modelling of Spatial Distribution of Covid-19 Outbreak Using Diffusion Equation
Authors: Kayode Oshinubi, Brice Kammegne, Jacques Demongeot
Abstract:
The use of mathematical tools like partial differential equations and ordinary differential equations has become very important for predicting the evolution of a viral disease in a population in order to take preventive and curative measures. In December 2019, a novel variety of coronavirus (SARS-CoV-2) was identified in Wuhan, Hubei Province, China, causing a severe and potentially fatal respiratory syndrome, i.e., COVID-19. It subsequently became a pandemic, declared by the World Health Organization (WHO) on March 11, 2020, which has spread around the globe. A reaction-diffusion system is a mathematical model that describes the evolution of a phenomenon subject to two processes: a reaction process, in which different substances are transformed, and a diffusion process, which causes a distribution in space. This article provides a mathematical study of the Susceptible, Exposed, Infected, Recovered, and Vaccinated (SEIRV) population model of the COVID-19 pandemic through reaction-diffusion equations. Both local and global asymptotic stability conditions for the disease-free and endemic equilibria are determined using a Lyapunov function, and the endemic equilibrium point exists and is stable if it satisfies the Routh-Hurwitz criteria. Adequate conditions for the existence and uniqueness of the solution of the model have also been proved. We show the spatial distribution of the model compartments when the basic reproduction number satisfies R0 < 1 and R0 > 1, and a sensitivity analysis is performed in order to determine the most sensitive parameters in the proposed model. We demonstrate the model's effectiveness by performing numerical simulations, and we investigate the impact of vaccination and the significance of the spatial distribution parameters in the spread of COVID-19.
The findings indicate that reducing contact with infected persons and increasing the proportion of susceptible people who receive a high-efficacy vaccination will lessen the burden of COVID-19 on the population, offering public health policymakers a better understanding of COVID-19 management.
Keywords: COVID-19, SEIRV epidemic model, reaction-diffusion equation, basic reproduction number, vaccination, spatial distribution
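As a hedged sketch of the compartmental core of such a model (the non-spatial SEIRV dynamics, without the diffusion terms), the following forward-Euler simulation uses hypothetical parameter values, not the rates fitted in the paper:

```python
# Illustrative sketch: non-spatial SEIRV compartments by forward Euler.
# All rate parameters are hypothetical, not the paper's fitted values.

def simulate_seirv(days=200, dt=0.1,
                   beta=0.3,     # transmission rate (assumed)
                   sigma=0.2,    # incubation rate, E -> I (assumed)
                   gamma=0.1,    # recovery rate, I -> R (assumed)
                   nu=0.01):     # vaccination rate, S -> V (assumed)
    S, E, I, R, V = 0.99, 0.0, 0.01, 0.0, 0.0   # population fractions
    for _ in range(int(days / dt)):
        new_inf = beta * S * I
        dS = -new_inf - nu * S
        dE = new_inf - sigma * E
        dI = sigma * E - gamma * I
        dR = gamma * I
        dV = nu * S
        S += dS * dt; E += dE * dt; I += dI * dt; R += dR * dt; V += dV * dt
    return S, E, I, R, V

S, E, I, R, V = simulate_seirv()
print(f"final susceptible share: {S:.3f}")
```

Since the derivatives sum to zero, the total population fraction is conserved at each step; raising nu (vaccination) or lowering beta (contact reduction) shrinks the final epidemic size, consistent with the qualitative conclusion above.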
Procedia PDF Downloads 123
5553 Enhancing Food Quality and Safety Management in Ethiopia's Food Processing Industry: Challenges, Causes, and Solutions
Authors: Tuji Jemal Ahmed
Abstract:
Food quality and safety challenges are prevalent in Ethiopia's food processing industry and can have adverse effects on consumers' health and wellbeing. The country is known for its diverse range of agricultural products, which are essential to its economy. However, poor food quality and safety policies and management systems in the food processing industry have led to several health problems, foodborne illnesses, and economic losses. This paper aims to highlight the causes and effects of food safety and quality issues in the food processing industry of Ethiopia and to discuss potential solutions to address these issues. One of the main causes of poor food quality and safety in Ethiopia's food processing industry is the lack of adequate regulations and enforcement mechanisms. The absence of comprehensive food safety and quality policies and guidelines has led to substandard practices in the food manufacturing process. Moreover, the lack of monitoring and enforcement of existing regulations has created a conducive environment for unscrupulous businesses to engage in unsafe practices that endanger public health. The effects of poor food quality and safety are significant, ranging from the loss of human lives to increased healthcare costs and the loss of consumer confidence in the food processing industry. Foodborne illnesses, such as diarrhea, typhoid fever, and cholera, are prevalent in Ethiopia, and poor food quality and safety practices contribute significantly to their prevalence. Additionally, food recalls due to contamination or mislabeling often result in significant economic losses for businesses in the food processing industry. To address these challenges, the Ethiopian government has begun to take steps to improve food quality and safety in the food processing industry.
One of the most notable initiatives is the Ethiopian Food and Drug Administration (EFDA), which was established in 2010 to regulate and monitor the quality and safety of food and drug products in the country. The EFDA has implemented several measures to enhance food safety, such as conducting routine inspections, monitoring the importation of food products, and enforcing strict labeling requirements. Another potential solution to improve food quality and safety in Ethiopia's food processing industry is the implementation of food safety management systems (FSMS). An FSMS is a set of procedures and policies designed to identify, assess, and control food safety hazards throughout the food manufacturing process. Implementing an FSMS can help businesses in the food processing industry identify and address potential hazards before they cause harm to consumers. Additionally, the implementation of an FSMS can help businesses comply with existing food safety regulations and guidelines. In conclusion, improving food quality and safety policies and management systems in Ethiopia's food processing industry is critical to protecting public health and enhancing the country's economy. Addressing the root causes of poor food quality and safety and implementing effective solutions, such as the establishment of regulatory agencies and the implementation of food safety management systems, can help to improve the overall safety and quality of the country's food supply.
Keywords: food quality, food safety, policy, management system, food processing industry
Procedia PDF Downloads 85
5552 Role of Maternal Astaxanthin Supplementation on Brain Derived Neurotrophic Factor and Spatial Learning Behavior in Wistar Rat Offspring
Authors: K. M. Damodara Gowda
Abstract:
Background: Maternal health and nutrition are considered the predominant factors influencing brain functional development. If the mother is free of illness and genetic defects, maternal nutrition is one of the most critical factors affecting brain development. Calorie restriction causes significant impairment in spatial learning ability and in the levels of Brain Derived Neurotrophic Factor (BDNF) in rats, but the mechanism by which prenatal under-nutrition leads to impairment of learning and memory function is still unclear. In the present study, the effect of prenatal astaxanthin supplementation on BDNF levels and on spatial learning and memory performance in the offspring of normal, calorie-restricted and astaxanthin-supplemented rats was investigated. Methodology: The rats were administered 6 mg and 12 mg of astaxanthin per kg body weight for 21 days, following which acquisition and retention of spatial memory were tested in a partially baited eight-arm radial maze. The BDNF level in different regions of the brain (cerebral cortex, hippocampus and cerebellum) was estimated by the ELISA method. Results: Calorie-restricted animals treated with astaxanthin made significantly more correct choices (P < 0.05) and fewer reference memory errors (P < 0.05) on the tenth day of training compared to the offspring of untreated calorie-restricted animals. Calorie-restricted animals treated with astaxanthin also made significantly more correct choices (P < 0.001) than untreated calorie-restricted animals in a retention test 10 days after the training period. The mean BDNF level in the cerebral cortex, hippocampus and cerebellum of calorie-restricted animals treated with astaxanthin did not vary significantly from that of control animals. Conclusion: The findings of the study indicated that memory and learning were impaired in the offspring of calorie-restricted rats, and that this impairment was effectively modulated by astaxanthin at a dosage of 12 mg/kg body weight.
Likewise, the BDNF level in the cerebral cortex, hippocampus and cerebellum declined in the offspring of calorie-restricted animals, and this decline was also effectively normalized by astaxanthin.
Keywords: calorie restriction, learning, memory, cerebral cortex, hippocampus, cerebellum, BDNF, astaxanthin
Procedia PDF Downloads 232
5551 CRM Cloud Computing: An Efficient and Cost Effective Tool to Improve Customer Interactions
Authors: Gaurangi Saxena, Ravindra Saxena
Abstract:
Lately, cloud computing has been used to attain corporate goals more effectively and efficiently at lower cost. This computing paradigm has emerged as a powerful tool for optimum utilization of resources and for gaining competitiveness through cost reduction, achieving business goals with greater flexibility. Realizing the importance of this technique, most of the well-known companies in the computer industry, such as Microsoft, IBM, Google and Apple, are spending millions of dollars researching cloud computing and investigating the possibility of producing interface hardware for cloud computing systems. It is believed that, with the right middleware, a cloud computing system can execute all the programs a normal computer could run: potentially everything from the simplest generic word processing software to highly specialized and customized programs designed for a specific company could work successfully on a cloud computing system. A cloud is a pool of virtualized computer resources. Clouds are not limited to grid environments; they also support "interactive user-facing applications" such as web applications and three-tier architectures. Cloud computing is not a fundamentally new paradigm: it draws on existing technologies and approaches, such as utility computing, software-as-a-service, distributed computing, and centralized data centers. Some companies rent physical space to store servers and databases because they do not have it available on site; cloud computing gives these companies the option of storing data on someone else's hardware, removing the need for physical space on the front end. Prominent service providers like Amazon, Google, Sun, IBM, Oracle and Salesforce are extending computing infrastructures and platforms as a core for providing top-level services for computation, storage, databases and applications. Application services may include email, office applications, finance, video, audio and data processing.
By using a cloud computing system, a company can improve its customer relationship management (CRM). A CRM cloud computing system may be highly useful in delivering to a sales team a blend of unique functionalities that improve agent/customer interactions. This paper first defines cloud computing as a tool for running business activities more effectively and efficiently at a lower cost, and then distinguishes cloud computing from grid computing. Based on an exhaustive literature review, the authors discuss applications of cloud computing in different disciplines of management, especially in the field of marketing, with special reference to its use in CRM. The study concludes that a CRM cloud computing platform helps a company track data such as orders, discounts, references, competitors and much more. By using CRM cloud computing, companies can improve their customer interactions and, by serving customers more efficiently at a lower cost, gain a competitive advantage.
Keywords: cloud computing, competitive advantage, customer relationship management, grid computing
Procedia PDF Downloads 312
5550 The Spatial and Temporal Distribution of Ambient Benzene, Toluene, Ethylbenzene and Xylene Concentrations at an International Airport in South Africa
Authors: Ryan S. Johnson, Raeesa Moolla
Abstract:
Airports are known air pollution hotspots due to the variety of fuel-driven activities that take place within their confines. As such, people working within airports are particularly vulnerable to exposure to hazardous air pollutants, including hundreds of aromatic hydrocarbons and, more specifically, a group of compounds known as BTEX (viz. benzene, toluene, ethylbenzene and xylenes). These compounds have been identified as harmful to human and environmental health. Through the use of passive and active sampling methods, the spatial and temporal variability of benzene, toluene, ethylbenzene and xylene concentrations within the international airport was investigated. Two sampling campaigns were conducted. In order to quantify the temporal variability of concentrations within the airport, an active sampling strategy using the Synspec Spectras Gas Chromatography 955 instrument was used; a passive sampling campaign using Radiello passive samplers was used to quantify the spatial variability of these compounds. In addition, since meteorological factors are known to affect the dispersal and dilution of pollution, a Davis Pro-Weather 2 station was utilised to measure in situ weather parameters (viz. wind speed, wind direction and temperature). Results indicated that toluene varied on a daily temporal scale considerably more than the other concentrations. Toluene further exhibited a strong correlation with the meteorological parameters, inferring that toluene was affected by these parameters to a greater degree than the other pollutants. The passive sampling campaign revealed total BTEX concentrations ranging between 12.95 and 124.04 µg m⁻³. From the results obtained, it is clear that benzene, toluene, ethylbenzene and xylene concentrations are heterogeneously spatially dispersed within the airport.
Due to the slow wind speeds recorded over the passive sampling campaign (1.13 m s-1), the hotspots were located close to the main concentration sources. The most significant hotspot was located over the main apron of the airport. Further extensive investigation into the seasonality of hazardous air pollutants at the airport is recommended, so that sound conclusions can be drawn about the temporal and spatial distribution of benzene, toluene, ethyl-benzene and xylene concentrations within the airport.
Keywords: airport, air pollution hotspot, BTEX concentrations, meteorology
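The reported link between toluene and the weather parameters can be quantified with a simple correlation measure. The sketch below is illustrative only: the concentration and wind-speed values are hypothetical, not data from the study, and the abstract does not state which correlation statistic the authors used (Pearson's r is assumed here).

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    xd, yd = x - x.mean(), y - y.mean()
    return float((xd * yd).sum() / np.sqrt((xd ** 2).sum() * (yd ** 2).sum()))

# Hypothetical hourly toluene concentrations (ug m-3) and wind speeds (m s-1);
# these numbers are invented for illustration, not taken from the study.
toluene = [41.2, 38.5, 30.1, 22.4, 18.9, 25.6, 33.0, 40.7]
wind    = [0.6,  0.8,  1.1,  1.5,  1.7,  1.3,  0.9,  0.7]

r = pearson_r(toluene, wind)
print(f"Pearson r (toluene vs. wind speed): {r:.2f}")
```

A strongly negative r, as in this toy series, would be consistent with higher wind speeds diluting toluene concentrations at the sampling site.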
Procedia PDF Downloads 204
5549 Spatial Analysis of Flood Vulnerability in Highly Urbanized Area: A Case Study in Taipei City
Authors: Liang Weichien
Abstract:
Without adequate information and mitigation plans for natural disasters, the risk to urban populated areas will increase in the future as populations grow, especially in Taiwan. Taiwan is recognized as one of the world's high-risk areas, experiencing an average of 5.7 floods per year, and its cities should seek to strengthen coherence and consensus in how they plan for floods and climate change. Therefore, this study aims at understanding the vulnerability to flooding in Taipei City, Taiwan, by creating indicators and calculating the vulnerability of each study unit. The indicators were grouped into sensitivity and adaptive capacity based on the Intergovernmental Panel on Climate Change definition of vulnerability, and were weighted using Principal Component Analysis. However, previous research has assumed that the composition and influence of the indicators are the same in different areas. This disregards spatial correlation and might result in inaccurate explanations of local vulnerability. The study therefore used Geographically Weighted Principal Component Analysis, adding a geographic weighting matrix so that the dominant flood-impact characteristics can differ between areas. Cross-validation and the Akaike Information Criterion were used to select the bandwidth, with a Gaussian scheme as the bandwidth weighting. The ultimate outcome can be used to reduce damage potential by integrating the outputs into local mitigation plans and urban planning.
Keywords: flood vulnerability, geographically weighted principal components analysis, GWPCA, highly urbanized area, spatial correlation
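The PCA-based indicator weighting described above can be sketched as follows. This is a minimal, non-spatial illustration: the three indicators, five study units, and all data values are hypothetical, and it shows only the ordinary PCA weighting step, not the geographically weighted variant (GWPCA) that the study actually applies.

```python
import numpy as np

def pca_vulnerability_index(X):
    """Composite index from an (n_units x n_indicators) matrix.

    Standardizes the indicators, extracts principal components from their
    correlation matrix, and weights each component's scores by its share of
    explained variance -- a common (non-spatial) PCA weighting scheme.
    Note: eigenvector signs are arbitrary, so the index is only relative.
    """
    X = np.asarray(X, dtype=float)
    Z = (X - X.mean(axis=0)) / X.std(axis=0)   # standardize each indicator
    R = np.corrcoef(Z, rowvar=False)           # indicator correlation matrix
    eigvals, eigvecs = np.linalg.eigh(R)       # eigh returns ascending order
    order = np.argsort(eigvals)[::-1]          # reorder to descending
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    weights = eigvals / eigvals.sum()          # explained-variance shares
    scores = Z @ eigvecs                       # component scores per unit
    return scores @ weights                    # weighted composite per unit

# Hypothetical indicators for 5 study units:
# [population density, low-lying area fraction, drainage capacity]
X = [[12000, 0.40, 0.2],
     [ 8000, 0.25, 0.5],
     [15000, 0.55, 0.1],
     [ 5000, 0.10, 0.8],
     [10000, 0.35, 0.4]]

index = pca_vulnerability_index(X)
print(index)  # one relative vulnerability score per study unit
```

GWPCA replaces the single global correlation matrix here with one per location, built from geographically weighted observations, so the component loadings (and hence the weights) vary across space.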
Procedia PDF Downloads 286