Search results for: linear congruential algorithm

246 One Pot Synthesis of Cu–Ni–S/Ni Foam for the Simultaneous Removal and Detection of Norfloxacin

Authors: Xincheng Jiang, Yanyan An, Yaoyao Huang, Wei Ding, Manli Sun, Hong Li, Huaili Zheng

Abstract:

Residual antibiotics in the environment pose a threat to ecosystems and human health, so efficient removal and rapid detection of norfloxacin (NOR) in wastewater are very important. The main sources of NOR pollution are agricultural, pharmaceutical, and hospital wastewater, and the total consumption of NOR in China can reach 5440 tons per year. Neither animals nor humans can totally absorb and metabolize NOR, so part of the dose is excreted into the environment; as a result, residual NOR has been detected in water bodies. The hazards of NOR in wastewater lie in three aspects: (1) the removal capacity of wastewater treatment plants for NOR is limited (the reported average removal efficiency is only 68%); (2) NOR entering the environment promotes the emergence of drug-resistant strains; (3) NOR is toxic to many aquatic species. At present, removal and detection technologies for NOR are applied separately, which makes operation cumbersome. Developing simultaneous adsorption-flocculation removal and FTIR detection of pollutants has three advantages: (1) adsorption-flocculation supports detection, because the enrichment effect on the material surface improves detection sensitivity; (2) integrating removal and detection reduces material cost and simplifies operation; (3) FTIR detection endows the water treatment agent with molecular recognition and semi-quantitative detection of pollutants. It is therefore of great significance to develop a smart water treatment material with high removal capacity and detection ability for pollutants. This study explored the feasibility of combining a NOR removal method with a semi-quantitative detection method. A magnetic Cu-Ni-S/Ni foam was synthesized by in-situ loading of Cu-Ni-S nanostructures on the surface of Ni foam; the novelty of this material is the combination of adsorption-flocculation and semi-quantitative detection. Batch experiments showed that Cu-Ni-S/Ni foam has a high removal rate for NOR (96.92%), wide pH adaptability (pH = 4.0-10.0) and strong resistance to ion interference (0.1-100 mmol/L). According to the Langmuir fitting model, the removal capacity can reach 417.4 mg/g at 25 °C, which is much higher than that of the water treatment agents reported in most studies. Characterization indicated that the main removal mechanisms are surface complexation, cation bridging, electrostatic attraction, precipitation and flocculation. Transmission FTIR detection experiments showed that NOR on Cu-Ni-S/Ni foam has easily recognizable FTIR fingerprints, and the intensity of the characteristic peaks roughly reflects the NOR concentration. This semi-quantitative detection method has a wide linear range (5-100 mg/L) and a low limit of detection (4.6 mg/L). These results show that Cu-Ni-S/Ni foam has excellent removal performance and semi-quantitative detection capability for NOR molecules. This paper provides a new approach to designing and preparing multi-functional water treatment materials that achieve simultaneous removal and semi-quantitative detection of organic pollutants in water.
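
As a worked illustration of the Langmuir fitting mentioned above, the sketch below shows how an isotherm of the form qe = qmax·KL·Ce/(1 + KL·Ce) could be fitted to batch equilibrium data with SciPy; the concentration/loading pairs are hypothetical placeholders, not the study's measurements, and qmax is the fitted maximum removal capacity.

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(Ce, qmax, KL):
    """Langmuir isotherm: qe = qmax * KL * Ce / (1 + KL * Ce)."""
    return qmax * KL * Ce / (1.0 + KL * Ce)

# Hypothetical equilibrium data (mg/L, mg/g) -- placeholders, not the paper's measurements
Ce = np.array([2.0, 5.0, 10.0, 20.0, 40.0, 80.0])
qe = np.array([70.0, 140.0, 220.0, 300.0, 370.0, 400.0])

(qmax, KL), _ = curve_fit(langmuir, Ce, qe, p0=[400.0, 0.05])
print(f"q_max = {qmax:.1f} mg/g, K_L = {KL:.3f} L/mg")
```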

Keywords: adsorption-flocculation, antibiotics detection, Cu-Ni-S/Ni foam, norfloxacin

Procedia PDF Downloads 48
245 Role of Artificial Intelligence in Nano Proteomics

Authors: Mehrnaz Mostafavi

Abstract:

Recent advances in single-molecule protein identification (ID) and quantification techniques are poised to revolutionize proteomics, enabling researchers to delve into single-cell proteomics and identify low-abundance proteins crucial for biomedical and clinical research. This paper introduces a different approach to single-molecule protein ID and quantification using tri-color amino acid tags and a plasmonic nanopore device. A comprehensive simulator incorporating various physical phenomena was designed to predict and model the device's behavior under diverse experimental conditions, providing insights into its feasibility and limitations. The study employs a whole-proteome single-molecule identification algorithm based on convolutional neural networks, achieving high accuracies (>90%), particularly in challenging conditions (95–97%). To address potential challenges in clinical samples, where post-translational modifications may affect labeling efficiency, the paper evaluates protein identification accuracy under partial labeling conditions. Solid-state nanopores, capable of processing tens of individual proteins per second, are explored as a platform for this method. Unlike techniques relying solely on ion-current measurements, this approach enables parallel readout using high-density nanopore arrays and multi-pixel single-photon sensors. Convolutional neural networks contribute to the method's versatility and robustness, simplifying calibration procedures and potentially allowing protein ID based on partial reads. The study also discusses the efficacy of the approach under real experimental conditions, including the resolution of functionally similar proteins. The theoretical analysis, protein labeler program, finite-difference time-domain calculation of plasmonic fields, and simulation of nanopore-based optical sensing are detailed in the methods section. The study anticipates further exploration of the temporal distributions of protein translocation dwell-times and their impact on convolutional neural network identification accuracy. Overall, the research presents a promising avenue for advancing single-molecule protein identification and quantification with broad applications in proteomics research. The contributions made in methodology, accuracy, robustness, and technological exploration collectively position this work at the forefront of transformative developments in the field.
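
For readers unfamiliar with the modelling side, the sketch below is a minimal, hypothetical 1-D convolutional classifier over tri-color tag traces, assuming each molecule is represented as a fixed-length, three-channel intensity sequence; the sequence length, layer sizes, number of protein classes, and the random training data are illustrative assumptions, not the authors' simulator or network.

```python
import numpy as np
import tensorflow as tf

SEQ_LEN, N_CHANNELS, N_CLASSES = 256, 3, 20  # assumed sizes, not from the paper

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(SEQ_LEN, N_CHANNELS)),
    tf.keras.layers.Conv1D(32, kernel_size=7, activation="relu"),
    tf.keras.layers.MaxPooling1D(2),
    tf.keras.layers.Conv1D(64, kernel_size=5, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(N_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# Random placeholder data standing in for simulated tri-color photon traces
x = np.random.rand(512, SEQ_LEN, N_CHANNELS).astype("float32")
y = np.random.randint(0, N_CLASSES, size=512)
model.fit(x, y, epochs=2, batch_size=32, verbose=0)
```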

Keywords: nano proteomics, nanopore-based optical sensing, deep learning, artificial intelligence

Procedia PDF Downloads 40
244 Event Data Representation Based on Time Stamp for Pedestrian Detection

Authors: Yuta Nakano, Kozo Kajiwara, Atsushi Hori, Takeshi Fujita

Abstract:

With the shift toward electric vehicles (EVs), low-energy-consumption systems have become increasingly important. One of the key technologies for realizing low energy consumption is the dynamic vision sensor (DVS), also called an event sensor or neuromorphic vision sensor. This sensor has several notable features, such as high temporal resolution (up to 1 Mframe/s equivalent) and a high dynamic range (120 dB). However, the property that contributes most to low energy consumption is its sparsity: the sensor only reports pixels whose intensity changes, and no signal is produced in regions without intensity change. This makes it more energy efficient than conventional sensors such as RGB cameras, because redundant data are removed. The drawback is that the data are difficult to handle because the format differs completely from an RGB image: the acquired signals are asynchronous and sparse, and each event consists of an x-y coordinate, a polarity (+1 or -1), and a timestamp, with no intensity value such as an RGB pixel has. Existing algorithms therefore cannot be applied directly, and a new processing algorithm must be designed for DVS data. To overcome these format differences, most prior work accumulates events into frames and feeds them to deep learning models such as Convolutional Neural Networks (CNNs) for object detection and recognition. Even then, good performance is difficult to achieve because intensity information is missing; polarity is often used in place of intensity, but it is clearly not rich enough. In this context, we propose using the timestamp information as the data representation fed to the deep learning model. Concretely, we first build frames over fixed time periods and then assign each pixel an intensity derived from the timestamp within that frame, for example a high value for a recent event. We expect this representation to capture the features of moving objects in particular, because the timestamps encode movement direction and speed. Using the proposed method, we built our own dataset from a DVS mounted on a parked car, targeting a surveillance application that detects persons around the vehicle. We consider the DVS one of the ideal sensors for surveillance because it can run for a long time with low energy consumption when the scene is mostly static. For comparison, we reproduced a state-of-the-art method as a benchmark, which builds frames in the same way but feeds polarity information to the CNN, and we measured the object detection performance of both methods on the same dataset. Our method achieved an F1 score up to 7 points higher than the benchmark.
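
A minimal sketch of the timestamp-based representation described above: events (x, y, polarity, timestamp) inside a time window are rendered into a frame whose pixel values encode event recency. The sensor resolution, window length and sample events are placeholders, and the normalisation is one plausible choice rather than necessarily the paper's exact mapping.

```python
import numpy as np

def events_to_timestamp_frame(events, t_start, t_end, width, height):
    """Build a frame in which pixel intensity encodes event recency.

    events: array of shape (N, 4) with columns (x, y, polarity, timestamp).
    Pixels with a recent event within [t_start, t_end) get values close to 1,
    older events map towards 0, and pixels with no event stay at 0.
    """
    frame = np.zeros((height, width), dtype=np.float32)
    mask = (events[:, 3] >= t_start) & (events[:, 3] < t_end)
    for x, y, _pol, t in events[mask]:
        # Normalised recency; the most recent event at a pixel dominates
        frame[int(y), int(x)] = max(frame[int(y), int(x)],
                                    (t - t_start) / (t_end - t_start))
    return frame

# Hypothetical events: (x, y, polarity, timestamp in microseconds)
events = np.array([[10, 5, 1, 100.0], [10, 5, -1, 900.0], [3, 7, 1, 400.0]])
print(events_to_timestamp_frame(events, 0.0, 1000.0, width=16, height=16)[5, 10])  # 0.9
```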

Keywords: event camera, dynamic vision sensor, deep learning, data representation, object recognition, low energy consumption

Procedia PDF Downloads 68
243 Simulation of the Flow in a Circular Vertical Spillway Using a Numerical Model

Authors: Mohammad Zamani, Ramin Mansouri

Abstract:

Spillways are among the most important hydraulic structures of dams, providing the stability of the dam and downstream areas at the time of flood. A circular vertical spillway with various inlet forms is very effective when there is not enough space for other spillway types. Hydraulic flow in a vertical circular spillway falls into three regimes: free, orifice, and under pressure (submerged). In this research, the hydraulic flow characteristics of a circular vertical spillway are investigated with a CFD model. Two-dimensional unsteady RANS equations were solved numerically using the Finite Volume Method. The PISO scheme was applied for velocity-pressure coupling. The most widely used two-equation turbulence models, k-ε and k-ω, were chosen to model the Reynolds shear stress term. The power-law scheme was used for the discretization of the momentum, k, ε, and ω equations. The VOF method (geometric reconstruction algorithm) was adopted for interface simulation. Three computational grids (coarse, intermediate, and fine) were used to discretize the simulation domain. To simulate the flow, the k-ε (Standard, RNG, Realizable) and k-ω (Standard and SST) models were used, and, to find the best wall treatment, the standard wall function and the non-equilibrium wall function were both investigated. The laminar model did not produce satisfactory flow depth and velocity along the Morning-Glory spillway. The results of the most commonly used two-equation turbulence models (k-ε and k-ω) were identical. Furthermore, the standard wall function produced better results than the non-equilibrium wall function; thus, for the other simulations, the standard k-ε model with the standard wall function was preferred. The main comparison criterion in this study is the trajectory profile of the water jet. The results show that the fine computational grid, a velocity condition at the flow inlet boundary, and a pressure condition at the boundaries in contact with air provide the best possible results. The standard wall function was selected for the wall treatment, and the standard k-ε turbulence model gave results most consistent with the experiments. As the jet approaches the end of the basin, the difference between the computational and experimental results increases. The mesh with 10,602 nodes, the standard k-ε turbulence model, and the standard wall function provide the best results for modeling the flow in a circular vertical spillway. There was good agreement between the numerical and experimental results for the upper and lower nappe profiles. Regarding the water level over the crest and the discharge, at low water levels the numerical results agree well with the experiments, but as the water level increases, the difference between the numerical and experimental discharge grows. For the flow coefficient, as the P/R ratio decreases, the difference between the numerical and experimental results increases.

Keywords: circular vertical, spillway, numerical model, boundary conditions

Procedia PDF Downloads 53
242 Enhance Concurrent Design Approach through a Design Methodology Based on an Artificial Intelligence Framework: Guiding Group Decision Making to Balanced Preliminary Design Solution

Authors: Loris Franchi, Daniele Calvi, Sabrina Corpino

Abstract:

This paper presents a design methodology in which stakeholders are assisted in exploring a so-called negotiation space, aiming at maximizing both group social welfare and each stakeholder's perceived utility. The outcome is fewer design iterations needed for convergence together with higher solution effectiveness. During the early stage of a space project, not only the knowledge about the system but also the decision outcomes are often unknown. The scenario is exacerbated by the fact that decisions taken in this stage carry delayed costs. Hence, a clear definition of the problem under analysis is necessary, especially in the initial definition, and this can be obtained through robust generation and exploration of design alternatives. This process must consider that design usually involves various individuals who take decisions affecting one another, so effective coordination among these decision-makers is critical. Finding a mutually agreed solution reduces the iterations involved in the design process. To handle this scenario, the paper proposes a design methodology that aims to speed up the growth of the mission's concept maturity level. This is obtained through a guided exploration of the negotiation space, in which trade opportunities among stakeholders are autonomously explored and optimized via Artificial Intelligence algorithms. The negotiation space is generated via a multidisciplinary collaborative optimization method, informed by game theory and multi-attribute utility theory. In particular, game theory models the negotiation process to reach equilibria among stakeholder needs. Because of the huge dimension of the negotiation space, a collaborative optimization framework with an evolutionary algorithm has been integrated to guide the game process in efficiently and rapidly searching for the Pareto equilibria among stakeholders. Finally, the concept of utility constitutes the mechanism that bridges the language barrier between experts of different backgrounds and differing needs, using the elicited and modeled needs to evaluate a multitude of alternatives. To highlight the benefits of the proposed methodology, the paper presents the design of a CubeSat mission for the observation of the lunar radiation environment. The derived solution is able to balance all stakeholder needs while guaranteeing the effectiveness of the selected mission concept thanks to its robustness to change. The benefits provided by the proposed design methodology are highlighted, and further developments are proposed.
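
The sketch below illustrates, in a highly simplified form, the idea of letting an evolutionary loop search a negotiation space for designs that score well on an aggregate of stakeholder utilities; the two-variable design vector, the utility functions and the additive social-welfare measure are invented for illustration and do not reproduce the paper's game-theoretic or multi-attribute utility formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def utilities(x):
    """Hypothetical per-stakeholder utilities of a 2-variable design vector in [0, 1]^2."""
    u_science = 1.0 - abs(x[0] - 0.8)   # payload team prefers high x0
    u_power   = 1.0 - abs(x[0] - 0.4)   # power team prefers moderate x0
    u_cost    = 1.0 - x[1]              # programmatics prefers low x1
    return np.array([u_science, u_power, u_cost])

def social_welfare(x):
    return utilities(x).sum()           # simple additive welfare, a stand-in aggregation

# (mu + lambda) evolutionary loop over the toy negotiation space
pop = rng.random((30, 2))
for _ in range(50):
    children = np.clip(pop + rng.normal(0, 0.05, pop.shape), 0, 1)
    both = np.vstack([pop, children])
    pop = both[np.argsort([-social_welfare(x) for x in both])[:30]]

best = pop[0]
print(best, utilities(best))
```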

Keywords: concurrent engineering, artificial intelligence, negotiation in engineering design, multidisciplinary optimization

Procedia PDF Downloads 107
241 Quality Characteristics of Road Runoff in Coastal Zones: A Case Study in A25 Highway, Portugal

Authors: Pedro B. Antunes, Paulo J. Ramísio

Abstract:

Road runoff is a linear source of diffuse pollution that can cause significant environmental impacts. During rainfall events, pollutants from both stationary and mobile sources, which have accumulated on the road surface, are washed off by the surface runoff. Road runoff in coastal zones may present high levels of salinity and chlorides due to the proximity of the sea and transported marine aerosols. Organic matter concentration, which appears to be correlated with this process, may also be significant. This study assesses this phenomenon with the purpose of identifying the relationships between monitored water quality parameters and intrinsic site variables. To achieve this objective, an extensive monitoring program was conducted on a Portuguese coastal highway. The study included thirty rainfall events, under different weather, traffic and salt deposition conditions, over a three-year period. Various water quality parameters were evaluated in over 200 samples. In addition, the meteorological, hydrological and traffic parameters were continuously measured. The salt deposition rates (SDR) were determined by means of a wet candle device, which is an innovative feature of the monitoring program. The SDR, which varies throughout the year, shows a high correlation with wind speed and direction, but above all with wave propagation, so that it is lower in the summer despite the favorable wind direction in the case study. The distance to the sea, topography, ground obstacles and the platform altitude seem to be relevant as well. The high salinity of the runoff was confirmed; it increases the concentrations of the analyzed water quality parameters and introduces significant seawater features. In order to estimate the correlations and patterns of the different water quality parameters and the variables related to weather, road section and salt deposition, the study included exploratory data analysis using different techniques (e.g. Pearson correlation coefficients, Cluster Analysis and Principal Component Analysis), confirming some specific features of the investigated road runoff. Significant correlations among pollutants were observed. Organic matter was highlighted as strongly dependent on salinity. Indeed, the data analysis showed that some important water quality parameters could be divided into two major clusters based on their correlations with salinity (including organic-matter-associated parameters) and with total suspended solids (including some heavy metals). Furthermore, the concentrations of the most relevant pollutants seemed to be strongly dependent on some meteorological variables, particularly the duration of the antecedent dry period prior to each rainfall event and the average wind speed. Based on the results of a monitoring case study in a coastal zone, it was shown that the SDR, combined with the hydrological characteristics of road runoff, can contribute to a better knowledge of the runoff characteristics and help estimate the specific nature of the runoff and the related water quality parameters.
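
The exploratory analysis described above (Pearson correlations, clustering of parameters, and PCA) could be organised roughly as in the sketch below; the column names echo the parameters discussed, but the values are randomly generated placeholders rather than the monitored data set.

```python
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# Hypothetical event-mean concentrations for 30 rainfall events (placeholders)
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "salinity": rng.gamma(2.0, 2.0, 30),
    "COD": rng.gamma(2.0, 5.0, 30),
    "TSS": rng.gamma(2.0, 10.0, 30),
    "Zn": rng.gamma(2.0, 0.05, 30),
    "antecedent_dry_days": rng.integers(1, 20, 30),
})

print(df.corr(method="pearson").round(2))            # pairwise Pearson coefficients

z = StandardScaler().fit_transform(df)
pca = PCA(n_components=2).fit(z)
print(pca.explained_variance_ratio_)                 # variance captured by first components

# Cluster the parameters (columns) into two groups, as suggested by the abstract
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(z.T)
print(dict(zip(df.columns, labels)))
```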

Keywords: coastal zones, monitoring, road runoff pollution, salt deposition

Procedia PDF Downloads 213
240 Web and Smart Phone-based Platform Combining Artificial Intelligence and Satellite Remote Sensing Data to Geoenable Villages for Crop Health Monitoring

Authors: Siddhartha Khare, Nitish Kr Boro, Omm Animesh Mishra

Abstract:

Recent food price hikes may signal the end of an era of predictable global grain crop plenty due to climate change, population expansion, and dietary changes. Food consumption will treble in 20 years, requiring enormous production expenditures. Rainfall and seasonal cycles have altered climatic and atmospheric conditions over the past decade, and India's tropical agriculture relies on evapotranspiration and the monsoons. In places with limited resources, global environmental change affects agricultural productivity and farmers' capacity to adjust to changing moisture patterns. Motivated by these difficulties, satellite remote sensing might be combined with near-surface imaging data (smartphones, UAVs, and PhenoCams) to enable phenological monitoring and fast evaluation of the field-level consequences of extreme weather events on smallholder agricultural output. To accomplish this, all of a community's agricultural field boundaries and crop types must be mapped digitally. With the improvement of satellite remote sensing technologies, a geo-referenced database can be created for rural Indian agricultural fields, and, using AI, digital agricultural solutions can be designed for individual farms. The main objective is to geo-enable each farm, together with its seasonal crop information, by combining Artificial Intelligence (AI) with satellite and near-surface data, and then to establish long-term crop monitoring through in-depth field analysis and scanning of fields with satellite-derived vegetation indices. We developed an AI-based algorithm to track time-lapse vegetation growth from PhenoCam or smartphone images. We also developed an Android application through which users can collect images of their fields. These images are sent to our local server, where further AI-based processing is performed. We are creating digital boundaries for individual farms and connecting these farms with our smartphone application to collect information about farmers and their crops in each season. We extract satellite-based information for each farm through the Google Earth Engine APIs and merge it, by farm location, with the crop data collected through the app, creating a database that provides information on crop quality at each location.
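
A minimal sketch of the kind of satellite-derived vegetation-index computation the server side might perform per farm polygon; the reflectance arrays are placeholders, and the Google Earth Engine integration mentioned above is not reproduced here.

```python
import numpy as np

def ndvi(nir, red, eps=1e-6):
    """Normalised Difference Vegetation Index = (NIR - Red) / (NIR + Red)."""
    nir = nir.astype(np.float32)
    red = red.astype(np.float32)
    return (nir - red) / (nir + red + eps)

# Placeholder reflectance rasters for one farm polygon (e.g. clipped from a satellite scene)
nir = np.array([[0.45, 0.50], [0.40, 0.55]])
red = np.array([[0.10, 0.08], [0.12, 0.07]])
index = ndvi(nir, red)
print(index.round(2), "mean NDVI:", float(index.mean().round(2)))
```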

Keywords: artificial intelligence, satellite remote sensing, crop monitoring, android and web application

Procedia PDF Downloads 65
239 Induction Machine Design Method for Aerospace Starter/Generator Applications and Parametric FE Analysis

Authors: Wang Shuai, Su Rong, K. J. Tseng, V. Viswanathan, S. Ramakrishna

Abstract:

The More-Electric-Aircraft concept in aircraft industry levies an increasing demand on the embedded starter/generators (ESG). The high-speed and high-temperature environment within an engine poses great challenges to the operation of such machines. In view of such challenges, squirrel cage induction machines (SCIM) have shown advantages due to its simple rotor structure, absence of temperature-sensitive components as well as low torque ripples etc. The tight operation constraints arising from typical ESG applications together with the detailed operation principles of SCIMs have been exploited to derive the mathematical interpretation of the ESG-SCIM design process. The resultant non-linear mathematical treatment yielded unique solution to the SCIM design problem for each configuration of pole pair number p, slots/pole/phase q and conductors/slot zq, easily implemented via loop patterns. It was also found that not all configurations led to feasible solutions and corresponding observations have been elaborated. The developed mathematical procedures also proved an effective framework for optimization among electromagnetic, thermal and mechanical aspects by allocating corresponding degree-of-freedom variables. Detailed 3D FEM analysis has been conducted to validate the resultant machine performance against design specifications. To obtain higher power ratings, electrical machines often have to increase the slot areas for accommodating more windings. Since the available space for embedding such machines inside an engine is usually short in length, axial air gap arrangement appears more appealing compared to its radial gap counterpart. The aforementioned approach has been adopted in case studies of designing series of AFIMs and RFIMs respectively with increasing power ratings. Following observations have been obtained. Under the strict rotor diameter limitation AFIM extended axially for the increased slot areas while RFIM expanded radially with the same axial length. Beyond certain power ratings AFIM led to long cylinder geometry while RFIM topology resulted in the desired short disk shape. Besides the different dimension growth patterns, AFIMs and RFIMs also exhibited dissimilar performance degradations regarding power factor, torque ripples as well as rated slip along with increased power ratings. Parametric response curves were plotted to better illustrate the above influences from increased power ratings. The case studies may provide a basic guideline that could assist potential users in making decisions between AFIM and RFIM for relevant applications.
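
The loop-pattern search over configurations of pole pair number p, slots/pole/phase q and conductors/slot zq could be organised as in the sketch below; the slot-count relation is the standard S = 3·2p·q for a three-phase machine, while the feasibility screen is only a placeholder for the electromagnetic, thermal and mechanical constraints evaluated in the paper.

```python
from itertools import product

PHASES = 3

def slot_count(p, q):
    """Total stator slots for a 3-phase machine: S = phases * poles * q."""
    return PHASES * (2 * p) * q

def is_feasible(p, q, zq):
    """Placeholder screen; the paper's electromagnetic, thermal and mechanical
    design constraints would be evaluated here instead of these simple bounds."""
    slots = slot_count(p, q)
    return 24 <= slots <= 72 and 2 <= zq <= 20

feasible = [
    {"p": p, "q": q, "zq": zq, "slots": slot_count(p, q)}
    for p, q, zq in product(range(1, 5), range(1, 6), range(2, 21, 2))
    if is_feasible(p, q, zq)
]
print(len(feasible), feasible[:3])
```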

Keywords: axial flux induction machine, electrical starter/generator, finite element analysis, squirrel cage induction machine

Procedia PDF Downloads 432
238 Forests, the Sanctuaries to Specialist and Rare Wild Native Bees at the Foothills of Western Himalayas

Authors: Preeti Virkar, V. P. Uniyal, Vinod Kumar Bhatt

Abstract:

With a 50% decline in managed honey bee hives in Europe and the Americas, farmers and landscape managers are turning to native wild bees for their essential pollination services. Wild bee populations are also endangered by rapid land-use changes from anthropogenic activities. With an escalating population reaching 9.0 billion by 2050, human-induced land-use changes are predicted to further deteriorate the habitats of numerous species by the turn of this century. The status of bees is uncertain, especially in the tropical regions of the world, which complicates assessment of the global pollinator decline and of the essential services bees provide to wild and managed flora. Our investigation compares wild native bee diversity and status in forests and agroecosystems of the Doon Valley landscape, situated at the foothills of the Himalayan ranges, Uttarakhand, India. We ask whether (1) natural habitats are a refuge to richer and rarer bee communities than agroecosystems, (2) agroecosystems closer to natural habitats are more similar to them than agroecosystems farther away and hence support richer bee communities, and (3) polyculture farms support richer bee communities than monocultures. Data were collected using observation and pan-trap sampling from February to May, 2012 to 2014. We recorded 43 species of bees in the Doon Valley, belonging to 5 families: Megachilidae, Apidae, Andrenidae, Halictidae and Colletidae. A multinomial model approach was used to classify the bees by habitat, showing that forests support a greater number of specialist species (26%, n = 11) than agroecosystems (7%, n = 3). The valley had many species categorized as rare (58%, n = 25) and very few generalists (9%, n = 4). A linear regression model run on our data demonstrated higher bee diversity in agroecosystems in close proximity to forests (H' for < 200 m = 1.60) compared to those farther away (H' for > 600 m = 0.56) (R2 = 0.782, SE = 0.148, p value = 0.004). Organic agriculture supported significantly greater species richness than conventional farms (Mann-Whitney U test, n1 = 33, n2 = 35; P = 0.001). Forest ecosystems are a refuge for rare specialist groups and support bee communities in nearby agroecosystems. The findings of our investigation demonstrate the importance of natural habitats as a potential refuge for rare native wild bee pollinators. Polyculture in the valley behaves similarly to natural habitats and supports diverse bee communities in comparison to conventional monocultures. Our study suggests that farming communities adopt diverse organic agriculture systems to attract wild pollinators and improve crop production. Forests are sanctuaries for bees to nest, forage, and breed. Therefore, our findings also suggest that landscape managers not only preserve protected areas but also enhance floral diversity in semi-natural and urban areas.
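
The two statistical comparisons reported above (a linear regression of diversity against distance to forest, and a Mann-Whitney U test between organic and conventional farms) could be run as in the sketch below; the arrays are illustrative placeholders, not the field data.

```python
import numpy as np
from scipy.stats import linregress, mannwhitneyu

# Placeholder field data, not the study's observations
distance_to_forest_m = np.array([50, 120, 180, 250, 400, 550, 650, 800])
shannon_diversity    = np.array([1.7, 1.6, 1.5, 1.2, 1.0, 0.8, 0.6, 0.5])

fit = linregress(distance_to_forest_m, shannon_diversity)
print(f"R^2 = {fit.rvalue**2:.3f}, p = {fit.pvalue:.4f}, slope = {fit.slope:.5f}")

organic_richness      = np.array([9, 11, 8, 12, 10, 13, 9, 11])
conventional_richness = np.array([5, 6, 4, 7, 5, 6, 4, 5])
u, p = mannwhitneyu(organic_richness, conventional_richness, alternative="two-sided")
print(f"Mann-Whitney U = {u}, p = {p:.4f}")
```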

Keywords: native bees, pollinators, polyculture, agroecosystem, natural habitat, diversity, monoculture, specialists, generalists

Procedia PDF Downloads 180
237 Review of Carbon Materials: Application in Alternative Energy Sources and Catalysis

Authors: Marita Pigłowska, Beata Kurc, Maciej Galiński

Abstract:

The application of carbon materials across the electrochemical industry increases each year due to the many interesting properties they possess. These include, among others, a well-developed specific surface area, porosity, high sorption capacity, good adsorption properties, low bulk density, electrical conductivity and chemical resistance. All these properties allow for their effective use, among others, in supercapacitors, which can reach capacitances of the order of 100 F thanks to the carbon electrodes constituting the capacitor plates. Carbons (including expanded graphite, carbon black, graphitic carbon fibers and activated carbon) are commonly used in electrochemical methods of removing oil derivatives, e.g. phenols and their derivatives, from water after tanker disasters by electrochemical anodic oxidation. Phenol can occupy practically the entire surface of a carbon material and leave the water clean of hydrophobic impurities. Regeneration of such electrodes is not complicated either: it is carried out by electrochemical methods that unblock the pores and reduce resistances, reactivating the electrodes for subsequent adsorption processes. Graphite is commonly used as an anode material in lithium-ion cells, but because of its limited capacity (372 mAh g-1), new solutions are sought that meet capacity, efficiency and economic criteria. Increasingly, biodegradable materials, green materials, biomass and waste (including agricultural waste) are used in order to reuse these streams, reduce greenhouse effects and, above all, meet the biodegradability criterion necessary for the production of lithium-ion cells as chemical power sources. The most common of these materials are cellulose, starch, and wheat, rice and corn waste, e.g. from agricultural, paper and pharmaceutical production. Such products are subjected to appropriate treatments depending on the desired application (including chemical, thermal and electrochemical). Starch is a biodegradable polysaccharide consisting of polymeric units, amylose and amylopectin, that build the ordered (linear) and amorphous (branched) structure of the polymer. Carbon is also used as a catalyst. Elemental carbon has become available in many nano-structured forms representing the hybridization combinations found in the primary carbon allotropes, and the materials can be enriched with a large number of surface functional groups. There are many examples of catalytic applications of carbon in the literature, but the development of this field has been hampered by the lack of a conceptual approach combining structure and function and a lack of understanding of material synthesis. In the context of catalytic applications, the relevant properties of the carbon, such as its conductivity range and bonding configuration, should be characterized. Such data, along with surface and textural information, can form the basis for the rational design of carbon-based catalytic supports.

Keywords: carbon materials, catalysis, BET, capacitors, lithium ion cell

Procedia PDF Downloads 142
236 Altering the Solid Phase Speciation of Arsenic in Paddy Soil: An Approach to Reduce Rice Grain Arsenic Uptake

Authors: Supriya Majumder, Pabitra Banik

Abstract:

The fate of arsenic (As) in the soil-plant environment is a critical emerging issue because of its threatening implications for human health; assessing the dynamics of As among soil solid components helps determine its potential availability for plant uptake. In the present study, we applied an improved Sequential Extraction Procedure (SEP) to identify the solid-phase speciation of As in paddy soil under variable soil environmental conditions during two consecutive seasons of rice cultivation. We coupled gradients of water management practice with fertilizer amendments to assess changes in the partitioning of As in a field experiment during the monsoon and post-monsoon seasons using two rice cultivars. Water management regimes were varied according to the rice cultivation method: conventional (waterlogged) versus the System of Rice Intensification, SRI (saturated). The fertilizer amendments comprised the nutrient treatments absolute control, NPK-RD, NPK-RD + calcium silicate, NPK-RD + ferrous sulfate, farmyard manure (FYM), FYM + calcium silicate, FYM + ferrous sulfate, vermicompost (VC), VC + calcium silicate, and VC + ferrous sulfate. After harvest, soil samples were sequentially extracted to estimate the partitioning of As among the fractions: exchangeable (F1), specifically sorbed (F2), bound to amorphous Fe oxides (F3), bound to crystalline Fe oxides (F4), bound to organic matter (F5) and residual (F6). Results showed that the major proportions of As were found in F3, F4 and F6, whereas F1 exhibited the lowest proportion of total soil As. Among the nutrient-treatment-mediated changes in As fractions, the application of organic manure and ferrous sulfate significantly restricted the release of As from the exchangeable phase. Meanwhile, conventional practice produced a much higher release of As from F1 compared to SRI, which may substantially increase the environmental risk. In contrast, SRI practice retained a significantly higher proportion of As in the F2, F3 and F4 phases, resulting in restricted mobilization of As. This was reflected in rice grain As bioavailability, with reductions in grain As concentration of 33% and 55% under SRI relative to the conventional treatment (p < 0.05) during the monsoon and post-monsoon seasons, respectively. A prediction of rice grain As bioavailability based on a linear regression model was also performed. The results demonstrated that rice grain As concentration was positively correlated with the As concentration in F1 and negatively correlated with F2, F3 and F4, with a satisfactory level of variation explained (p < 0.001). Finally, we conclude that F1, F2, F3 and F4 are the major soil As fractions governing the potential availability of As in soil, and we suggest that rice cultivation with the SRI treatment carries a lower risk of As availability in soil. Such detailed information may be useful for adopting management practices for rice grown in contaminated soil, particularly with respect to environmental issues.

Keywords: arsenic, fractionation, paddy soil, potential availability

Procedia PDF Downloads 101
235 Estimation of Soil Nutrient Content Using Google Earth and Pleiades Satellite Imagery for Small Farms

Authors: Lucas Barbosa Da Silva, Jun Okamoto Jr.

Abstract:

Precision Agriculture has long being benefited from crop fields’ aerial imagery. This important tool has allowed identifying patterns in crop fields, generating useful information to the production management. Reflectance intensity data in different ranges from the electromagnetic spectrum may indicate presence or absence of nutrients in the soil of an area. Different relations between the different light bands may generate even more detailed information. The knowledge of the nutrients content in the soil or in the crop during its growth is a valuable asset to the farmer that seeks to optimize its yield. However, small farmers in Brazil often lack the resources to access this kind information, and, even when they do, it is not presented in a comprehensive and/or objective way. So, the challenges of implementing this technology ranges from the sampling of the imagery, using aerial platforms, building of a mosaic with the images to cover the entire crop field, extracting the reflectance information from it and analyzing its relationship with the parameters of interest, to the display of the results in a manner that the farmer may take the necessary decisions more objectively. In this work, it’s proposed an analysis of soil nutrient contents based on image processing of satellite imagery and comparing its outtakes with commercial laboratory’s chemical analysis. Also, sources of satellite imagery are compared, to assess the feasibility of using Google Earth data in this application, and the impacts of doing so, versus the application of imagery from satellites like Landsat-8 and Pleiades. Furthermore, an algorithm for building mosaics is implemented using Google Earth imagery and finally, the possibility of using unmanned aerial vehicles is analyzed. From the data obtained, some soil parameters are estimated, namely, the content of Potassium, Phosphorus, Boron, Manganese, among others. The suitability of Google Earth Imagery for this application is verified within a reasonable margin, when compared to Pleiades Satellite imagery and to the current commercial model. It is also verified that the mosaic construction method has little or no influence on the estimation results. Variability maps are created over the covered area and the impacts of the image resolution and sample time frame are discussed, allowing easy assessments of the results. The final results show that easy and cheaper remote sensing and analysis methods are possible and feasible alternatives for the small farmer, with little access to technological and/or financial resources, to make more accurate decisions about soil nutrient management.

Keywords: remote sensing, precision agriculture, mosaic, soil, nutrient content, satellite imagery, aerial imagery

Procedia PDF Downloads 145
234 The Increasing Trend in Research Among Orthopedic Residency Applicants is Significant to Matching: A Retrospective Analysis

Authors: Nickolas A. Stewart, Donald C. Hefelfinger, Garrett V. Brittain, Timothy C. Frommeyer, Adrienne Stolfi

Abstract:

Orthopedic surgery is currently considered one of the most competitive specialties that medical students can apply to for residency training. As evidenced by increasing United States Medical Licensing Examination (USMLE) scores, overall grades, and publication, presentation, and abstract numbers, this specialty is getting increasingly competitive. The recent change of USMLE Step 1 scores to pass/fail has created additional challenges for medical students planning to apply for orthopedic residency. Until now, these scores have been a tool used by residency programs to screen applicants as an initial factor in determining the strength of their application. With USMLE Step 1 converting to a pass/fail grading criterion, the question remains as to what will take its place on the ERAS application. The primary objective of this study is to determine the trends in the number of research projects, abstracts, presentations, and publications among orthopedic residency applicants. Secondly, this study seeks to determine whether there is a relationship between the number of research projects, abstracts, presentations, and publications and match rates. The researchers utilized the National Resident Matching Program's Charting Outcomes in the Match between 2007 and 2022 to identify mean publication and research project numbers for allopathic and osteopathic US orthopedic surgery senior applicants. A paired t-test was performed on the mean numbers of publications and research projects of matched and unmatched applicants. Additionally, simple linear regressions within matched and unmatched applicants were used to determine the association between year and the numbers of abstracts, presentations, publications, and research projects. To determine whether the increase in the numbers of abstracts, presentations, publications, and research projects differs significantly between matched and unmatched applicants, an analysis of covariance was used with an interaction term added to the model; the interaction term tests for a difference between the slopes of the two groups. The data show that from 2007 to 2022, the average number of research publications increased from 3 to 16.5 for matched orthopedic surgery applicants. The paired t-test had a significant p-value of 0.006 for the number of research publications between matched and unmatched applicants. In conclusion, the average number of publications for orthopedic surgery applicants has significantly increased for matched and unmatched applicants from 2007 to 2022. Moreover, this increase has accelerated in recent years, as evidenced by an increase of only 1.5 publications from 2007 to 2011 versus 5.0 publications from 2018 to 2022. The number of abstracts, presentations, and publications is a significant factor in an applicant's likelihood of successfully matching into an orthopedic residency program. With USMLE Step 1 converted to pass/fail, the researchers expect students and program directors to place increased importance on additional factors that can help applicants stand out. This study demonstrates that research will be a primary component in stratifying future orthopedic surgery applicants and suggests that the average number of research publications will continue to climb. Further study is required to determine whether this growth is sustainable.
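
The slope comparison described above can be expressed as a regression with a year-by-group interaction term; the sketch below shows one way to fit it with statsmodels, using placeholder yearly means rather than the actual Charting Outcomes values.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Placeholder yearly means in the style of Charting Outcomes data (not the actual values)
df = pd.DataFrame({
    "year":    [2007, 2011, 2014, 2018, 2022] * 2,
    "matched": [1, 1, 1, 1, 1, 0, 0, 0, 0, 0],
    "pubs":    [3.0, 4.5, 6.8, 11.5, 16.5, 2.0, 3.0, 4.5, 7.5, 10.0],
})

# The year:matched interaction tests whether the two groups' slopes differ
model = smf.ols("pubs ~ year * matched", data=df).fit()
print(model.params)
print("interaction p-value:", model.pvalues["year:matched"])
```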

Keywords: publications, orthopedic surgery, research, residency applications

Procedia PDF Downloads 109
233 Healthcare Utilization and Costs of Specific Obesity Related Health Conditions in Alberta, Canada

Authors: Sonia Butalia, Huong Luu, Alexis Guigue, Karen J. B. Martins, Khanh Vu, Scott W. Klarenbach

Abstract:

Obesity-related health conditions impose a substantial economic burden on payers due to increased healthcare use. Estimates of healthcare resource use and costs associated with obesity-related comorbidities are needed to inform policies and interventions targeting these conditions. Methods: Adults living with obesity were identified (a procedure-related body mass index code for class 2/3 obesity between 2012 and 2019 in Alberta, Canada; excluding those with bariatric surgery), and outcomes were compared over 1-year (2019/2020) between those who had and did not have specific obesity-related comorbidities. The probability of using a healthcare service (based on the odds ratio of a zero [OR-zero] cost) was compared; 95% confidence intervals (CI) were reported. Logistic regression and a generalized linear model with log link and gamma distribution were used for total healthcare cost comparisons ($CDN); cost ratios and estimated cost differences (95% CI) were reported. Potential socio-demographic and clinical confounders were adjusted for, and incremental cost differences were representative of a referent case. Results: A total of 220,190 adults living with obesity were included; 44% had hypertension, 25% had osteoarthritis, 24% had type-2 diabetes, 17% had cardiovascular disease, 12% had insulin resistance, 9% had chronic back pain, and 4% of females had polycystic ovarian syndrome (PCOS). The probability of hospitalization, ED visit, and ambulatory care was higher in those with a following obesity-related comorbidity versus those without: chronic back pain (hospitalization: 1.8-times [OR-zero: 0.57 [0.55/0.59]] / ED visit: 1.9-times [OR-zero: 0.54 [0.53/0.56]] / ambulatory care visit: 2.4-times [OR-zero: 0.41 [0.40/0.43]]), cardiovascular disease (2.7-times [OR-zero: 0.37 [0.36/0.38]] / 1.9-times [OR-zero: 0.52 [0.51/0.53]] / 2.8-times [OR-zero: 0.36 [0.35/0.36]]), osteoarthritis (2.0-times [OR-zero: 0.51 [0.50/0.53]] / 1.4-times [OR-zero: 0.74 [0.73/0.76]] / 2.5-times [OR-zero: 0.40 [0.40/0.41]]), type-2 diabetes (1.9-times [OR-zero: 0.54 [0.52/0.55]] / 1.4-times [OR-zero: 0.72 [0.70/0.73]] / 2.1-times [OR-zero: 0.47 [0.46/0.47]]), hypertension (1.8-times [OR-zero: 0.56 [0.54/0.57]] / 1.3-times [OR-zero: 0.79 [0.77/0.80]] / 2.2-times [OR-zero: 0.46 [0.45/0.47]]), PCOS (not significant / 1.2-times [OR-zero: 0.83 [0.79/0.88]] / not significant), and insulin resistance (1.1-times [OR-zero: 0.88 [0.84/0.91]] / 1.1-times [OR-zero: 0.92 [0.89/0.94]] / 1.8-times [OR-zero: 0.56 [0.54/0.57]]). After fully adjusting for potential confounders, the total healthcare cost ratio was higher in those with a following obesity-related comorbidity versus those without: chronic back pain (1.54-times [1.51/1.56]), cardiovascular disease (1.45-times [1.43/1.47]), osteoarthritis (1.36-times [1.35/1.38]), type-2 diabetes (1.30-times [1.28/1.31]), hypertension (1.27-times [1.26/1.28]), PCOS (1.08-times [1.05/1.11]), and insulin resistance (1.03-times [1.01/1.04]). Conclusions: Adults with obesity who have specific disease-related health conditions have a higher probability of healthcare use and incur greater costs than those without specific comorbidities; incremental costs are larger when other obesity-related health conditions are not adjusted for. In a specific referent case, hypertension was costliest (44% had this condition with an additional annual cost of $715 [$678/$753]). 
If these findings hold for the Canadian population, hypertension in persons with obesity represents an estimated additional annual healthcare cost of $2.5 billion among adults living with obesity (based on an adult obesity rate of 26%). Results of this study can inform decision making on investment in interventions that are effective in treating obesity and its complications.
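
The cost model described in the methods (a generalized linear model with log link and gamma distribution, with cost ratios obtained by exponentiating coefficients) could be fitted as in the sketch below; the person-level records are synthetic stand-ins, since the administrative data cannot be shared, and the covariates are illustrative only.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Synthetic stand-in for person-level cost data (placeholder, not the Alberta cohort)
rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "hypertension": rng.integers(0, 2, n),
    "age": rng.integers(30, 70, n),
    "sex": rng.integers(0, 2, n),
})
mu = np.exp(7.0 + 0.24 * df["hypertension"] + 0.01 * df["age"])
df["cost"] = rng.gamma(shape=2.0, scale=mu / 2.0)

# Total costs modelled with a GLM using a log link and gamma distribution, as in the abstract;
# the probability of any use would be handled separately with logistic regression (not shown)
glm = smf.glm("cost ~ hypertension + age + sex", data=df,
              family=sm.families.Gamma(link=sm.families.links.Log())).fit()
print(np.exp(glm.params["hypertension"]))   # cost ratio: hypertension vs none
```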

Keywords: administrative data, healthcare cost, obesity-related comorbidities, real world evidence

Procedia PDF Downloads 123
232 Phytochemical and Vitamin Composition of Wild Edible Plants Consumed in South West Ethiopia

Authors: Abebe Yimer, Sirawdink Fikereyesus Forsido, Getachew Addis, Abebe Ayelign

Abstract:

Background: Oxidative stress is an important health problem, as it induces chronic diseases such as cancer, cardiovascular disease, diabetes, and neurodegenerative disease. Plant-sourced natural antioxidants have gained attention because synthetic antioxidants can negatively impact human health. Wild edible plants are a cheap source of dietary medicine, mainly in rural communities in south-west Ethiopia and elsewhere in the country. Thus, the study aimed to determine the total phenol, flavonoid, antioxidant, vitamin C, and beta-carotene content of the wild edible plants Solanum nigrum L., Vigna membranacea A. Rich, Dioscorea praehensilis Benth., Trilepisium madagascariense D.C. and Cleome gynandra L. Methods: Methanol was used to extract samples of the oven-dried edible plants. Total phenolic content (TPC) was determined using the Folin-Ciocalteu method, whereas total flavonoid content (TFC) was determined using the aluminium chloride colorimetric method. Antioxidant activities were evaluated in vitro using the 2,2-diphenyl-1-picrylhydrazyl (DPPH) and ferric reducing antioxidant power (FRAP) assays. Additionally, beta-carotene was assessed using a spectrophotometric technique, whilst vitamin C was determined using a titration approach. Results: Total flavonoid content ranged from 0.85±0.03 mg CE/g in D. praehensilis Benth. tuber to 11.25±0.01 mg CE/g in C. gynandra L. Total phenolic content varied from 0.25±0.06 GAE/g in D. praehensilis Benth. tuber to 35.73±2.52 GAE/g in S. nigrum L. leaves. In the DPPH test, the highest antioxidant value (87.65%) was obtained in the S. nigrum L. leaves, whereas the lowest antioxidant value (50.12%) was found in D. praehensilis Benth. tuber. Similarly, in the FRAP assay, D. praehensilis Benth. tuber showed the least reducing potential (49.16±2.13 mM Fe2+/100 g), whilst the highest reducing potential was found in the S. nigrum L. leaves (188.12±1.13 mM Fe2+/100 g). The beta-carotene content ranged from 11.81±0.00 mg/100 g in D. praehensilis Benth. tubers to 34.49±0.95 mg/100 g in V. membranacea A. Rich leaves. The concentration of vitamin C ranged from 10.00±0.61 mg/100 g in D. praehensilis Benth. tubers to 45±1.80 mg/100 g in V. membranacea A. Rich leaves. The results showed high positive linear correlations between the TPC and TFC of the WEPs (r = 0.828), as well as between FRAP and total phenolic content (r = 0.943) and between FRAP and vitamin C (r = 0.928). Conclusion: These findings showed that the total phenolic and flavonoid contents of Solanum nigrum L. and Cleome gynandra L., respectively, are abundant. These plants may be used as a natural supply of dietary antioxidants, which may be useful in preventing oxidative stress. The study's findings also showed that Vigna membranacea A. Rich leaves are a cheap source of vitamin C and beta-carotene for people who consume these wild greens. Additional research on in vivo antioxidant activity, toxicological analysis, and the promotion of these wild food plants for agricultural production should be considered.

Keywords: antioxidant activity, beta-carotene, flavonoids, phenolic content, vitamin C

Procedia PDF Downloads 73
231 The Association between Attachment Styles, Satisfaction of Life, Alexithymia, and Psychological Resilience: The Mediational Role of Self-Esteem

Authors: Zahide Tepeli Temiz, Itir Tari Comert

Abstract:

Attachment patterns based on early emotional interactions between infant and primary caregiver continue to be influential in adult life, in terms of mental health and behaviors of individuals. Several studies reveal that infant-caregiver relationships have impressed the affect regulation, coping with stressful and negative situations, general satisfaction of life, and self image in adulthood, besides the attachment styles. The present study aims to examine the relationships between university students’ attachment style and their self-esteem, alexithymic features, satisfaction of life, and level of resilience. In line with this aim, the hypothesis of the prediction of attachment styles (anxious and avoidant) over life satisfaction, self-esteem, alexithymia, and psychological resilience was tested. Additionally, in this study Structural Equational Modeling was conducted to investigate the mediational role of self-esteem in the relationship between attachment styles and alexithymia, life satisfaction, and resilience. This model was examined with path analysis. The sample of the research consists of 425 university students who take education from several region of Turkey. The participants who sign the informed consent completed the Demographic Information Form, Experiences in Close Relationships-Revised, Rosenberg Self-Esteem Scale, The Satisfaction with Life Scale, Toronto Alexithymia Scale, and Resilience Scale for Adults. According to results, anxious, and avoidant dimensions of insecure attachment predicted the self-esteem score and alexithymia in positive direction. On the other hand, these dimensions of attachment predicted life satisfaction in negative direction. The results of linear regression analysis indicated that anxious and avoidant attachment styles didn’t predict the resilience. This result doesn’t support the theory and research indicating the relationship between attachment style and psychological resilience. The results of path analysis revealed the mediational role self esteem in the relation between anxious, and avoidant attachment styles and life satisfaction. In addition, SEM analysis indicated the indirect effect of attachment styles over alexithymia and resilience besides their direct effect. These findings support the hypothesis of this research relation to mediating role of self-esteem. Attachment theorists suggest that early attachment experiences, including supportive and responsive family interactions, have an effect on resilience to harmful situations in adult life, ability to identify, describe, and regulate emotions and also general satisfaction with life. Several studies examining the relationship between attachment styles and life satisfaction, alexithymia, and psychological resilience draw attention to mediational role of self-esteem. Results of this study support the theory of attachment patterns with the mediation of self-image influence the emotional, cognitive, and behavioral regulation of person throughout the adulthood. Therefore, it is thought that any intervention intended for recovery in attachment relationship will increase the self-esteem, life satisfaction, and resilience level, on the one side, decrease the alexithymic features, on the other side.

Keywords: alexithymia, anxious attachment, avoidant attachment, life satisfaction, path analysis, resilience, self-esteem, structural equation

Procedia PDF Downloads 169
230 Video Analytics on Pedagogy Using Big Data

Authors: Jamuna Loganath

Abstract:

Education is the key to the development of any individual’s personality. Today’s students will be tomorrow’s citizens of the global society. The education of the student is the edifice on which his/her future will be built. Schools therefore should provide an all-round development of students so as to foster a healthy society. The behaviors and the attitude of the students in school play an essential role for the success of the education process. Frequent reports of misbehaviors such as clowning, harassing classmates, verbal insults are becoming common in schools today. If this issue is left unattended, it may develop a negative attitude and increase the delinquent behavior. So, the need of the hour is to find a solution to this problem. To solve this issue, it is important to monitor the students’ behaviors in school and give necessary feedback and mentor them to develop a positive attitude and help them to become a successful grownup. Nevertheless, measuring students’ behavior and attitude is extremely challenging. None of the present technology has proven to be effective in this measurement process because actions, reactions, interactions, response of the students are rarely used in the course of the data due to complexity. The purpose of this proposal is to recommend an effective supervising system after carrying out a feasibility study by measuring the behavior of the Students. This can be achieved by equipping schools with CCTV cameras. These CCTV cameras installed in various schools of the world capture the facial expressions and interactions of the students inside and outside their classroom. The real time raw videos captured from the CCTV can be uploaded to the cloud with the help of a network. The video feeds get scooped into various nodes in the same rack or on the different racks in the same cluster in Hadoop HDFS. The video feeds are converted into small frames and analyzed using various Pattern recognition algorithms and MapReduce algorithm. Then, the video frames are compared with the bench marking database (good behavior). When misbehavior is detected, an alert message can be sent to the counseling department which helps them in mentoring the students. This will help in improving the effectiveness of the education process. As Video feeds come from multiple geographical areas (schools from different parts of the world), BIG DATA helps in real time analysis as it analyzes computationally to reveal patterns, trends, and associations, especially relating to human behavior and interactions. It also analyzes data that can’t be analyzed by traditional software applications such as RDBMS, OODBMS. It has also proven successful in handling human reactions with ease. Therefore, BIG DATA could certainly play a vital role in handling this issue. Thus, effectiveness of the education process can be enhanced with the help of video analytics using the latest BIG DATA technology.

Keywords: big data, cloud, CCTV, education process

Procedia PDF Downloads 218
229 A Comparative Assessment of Information Value, Fuzzy Expert System Models for Landslide Susceptibility Mapping of Dharamshala and Surrounding, Himachal Pradesh, India

Authors: Kumari Sweta, Ajanta Goswami, Abhilasha Dixit

Abstract:

Landslides are geomorphic processes that play an essential role in the evolution of hill-slopes and long-term landscape evolution, but their abrupt nature and the associated catastrophic forces can have undesirable socio-economic impacts, such as substantial economic losses, fatalities, and ecosystem, geomorphological and infrastructure disturbances. The estimated fatality rate is approximately 1 person/100 sq. km, and the average economic loss due to landslides in the Himalayan belt is more than 550 crores/year. This study presents a comparative performance of a statistical bivariate method and a machine learning technique for landslide susceptibility mapping in and around Dharamshala, Himachal Pradesh. The final landslide susceptibility maps (LSMs), produced with better accuracy, could be used for land-use planning to prevent future losses. Dharamshala, a part of the North-western Himalaya, is one of the fastest-growing tourism hubs, with a total population of 30,764 according to the 2011 census, and is among the hundred Indian cities to be developed as smart cities under the PM's Smart Cities Mission. A total of 209 landslide locations were identified using high-resolution Linear Imaging Self-Scanning (LISS-IV) data. Thematic maps of the parameters influencing landslide occurrence were generated using remote sensing and other ancillary data in the GIS environment. The landslide causative parameters used in the study are slope angle, slope aspect, elevation, curvature, topographic wetness index, relative relief, distance from lineaments, land use/land cover, and geology. LSMs were prepared using Information Value (Info Val) and Fuzzy Expert System (FES) models. Info Val is a statistical bivariate method in which information values are calculated as the ratio of the landslide pixel density in each factor class (Si/Ni) to the overall landslide pixel density for the parameter (S/N). Using these information values, all parameters were reclassified and then summed in GIS to obtain the landslide susceptibility index (LSI) map. The FES method is a machine learning technique based on a 'mean and neighbour' strategy for constructing the fuzzifier (input) and defuzzifier (output) membership function (MF) structure, with the FR method used for formulating the if-then rules. Two types of membership structures were utilized for the membership functions: Bell-Gaussian (BG) and Trapezoidal-Triangular (TT). The LSI maps for BG and TT were obtained by applying the membership functions and if-then rules in MATLAB. The final LSMs were spatially and statistically validated. The validation results showed that, in terms of accuracy, Info Val (83.4%) is better than BG (83.0%) and TT (82.6%), whereas, in terms of spatial distribution, BG is best. Hence, considering both statistical and spatial accuracy, BG is the most accurate one.
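
A minimal sketch of the Information Value computation as defined above, i.e. the ratio of the landslide pixel density in a factor class (Si/Ni) to the overall density (S/N), applied to one causative factor; the rasters are random placeholders, and in the full method the reclassified factors are summed over all parameters to give the LSI.

```python
import numpy as np

def information_value(class_raster, landslide_mask):
    """Per-class information value: (Si/Ni) / (S/N), following the definition above.
    class_raster: integer class codes of one causative factor.
    landslide_mask: boolean raster of mapped landslide pixels (same shape)."""
    S = landslide_mask.sum()
    N = landslide_mask.size
    iv = {}
    for c in np.unique(class_raster):
        in_class = class_raster == c
        Ni = in_class.sum()
        Si = (landslide_mask & in_class).sum()
        iv[int(c)] = (Si / Ni) / (S / N) if Ni > 0 else 0.0
    return iv

# Toy rasters: 3 slope classes and a landslide inventory mask (placeholders)
rng = np.random.default_rng(2)
slope_class = rng.integers(1, 4, size=(100, 100))
landslides = rng.random((100, 100)) < np.where(slope_class == 3, 0.05, 0.01)

weights = information_value(slope_class, landslides)
lsi_layer = np.vectorize(weights.get)(slope_class)   # reclassified factor; summed over factors for the LSI
print(weights)
```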

Keywords: bivariate statistical techniques, BG and TT membership structure, fuzzy expert system, information value method, machine learning technique

Procedia PDF Downloads 103
228 A Long Short-Term Memory Based Deep Learning Model for Corporate Bond Price Predictions

Authors: Vikrant Gupta, Amrit Goswami

Abstract:

The fixed income market forms the basis of the modern financial market, and all other assets derive their value from the bond market. Owing to their over-the-counter nature, corporate bonds have relatively little publicly available data and are thus researched far less than equities. Bond price prediction is a complex financial time-series forecasting problem and is considered crucial in finance. Bond prices are highly volatile and noisy, which makes it very difficult for traditional statistical time-series models to capture the complexity of the series patterns, leading to inefficient forecasts. To overcome the inefficiencies of statistical models, various machine learning techniques were initially used in the literature for more accurate time-series forecasting. However, simple machine learning methods such as linear regression, support vector machines, and random forests fail to provide efficient results when tested on highly complex sequences such as stock and bond prices. Hence, to capture these intricate sequence patterns, various deep learning methodologies have been discussed in the literature. In this study, a recurrent neural network-based deep learning model using Long Short-Term Memory (LSTM) networks is discussed for predicting corporate bond prices. LSTMs have been widely used for sequence learning tasks in domains such as machine translation and speech recognition, and recent studies have shown promising results for forecasting complex time series compared with other methodologies. LSTMs are a special kind of recurrent neural network capable of learning long-term dependencies through their memory cells, which traditional neural networks fail to capture. A simple LSTM, a stacked LSTM, and a masked LSTM model are compared for varying input sequence lengths (three, seven, and 14 days). To facilitate faster learning and to gradually decompose the complexity of the bond price sequence, Empirical Mode Decomposition (EMD) is used, which improves the accuracy of the standalone LSTM model. With a set of technical indicators and the EMD-decomposed time series, the masked LSTM outperformed the other two variants in prediction accuracy. To benchmark the proposed model, the results are compared with a traditional time-series model (ARIMA), shallow neural networks, and the three LSTM variants discussed above. In summary, the results show that LSTM models provide more accurate forecasts and should be explored further within the asset management industry.
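
The sketch below is a minimal stacked-LSTM forecaster on sliding windows of a price series, assuming TensorFlow/Keras. It is not the authors' model: the EMD pre-decomposition, technical indicators, and masking variant are omitted, and the synthetic series merely stands in for corporate bond prices.

```python
# Stacked LSTM on 7-day sliding windows (one of the input lengths tested above).
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
prices = np.cumsum(rng.normal(0, 0.5, size=500)) + 100.0   # synthetic price path

def make_windows(series, lookback=7):
    """Turn a 1-D series into (samples, lookback, 1) inputs and next-day targets."""
    X = np.stack([series[i:i + lookback] for i in range(len(series) - lookback)])
    y = series[lookback:]
    return X[..., None], y

X, y = make_windows(prices, lookback=7)

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, return_sequences=True, input_shape=(7, 1)),
    tf.keras.layers.LSTM(16),              # second layer makes it a "stacked" LSTM
    tf.keras.layers.Dense(1),              # next-day price
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
print("next-day forecast:", float(model.predict(X[-1:], verbose=0)[0, 0]))
```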

Keywords: bond prices, long short-term memory, time series forecasting, empirical mode decomposition

Procedia PDF Downloads 107
227 Engineering Economic Analysis of Implementing a Materials Recovery Facility in Jamaica: A Green Industry Approach towards a Sustainable Developing Economy

Authors: Damian Graham, Ashleigh H. Hall, Damani R. Sulph, Michael A. James, Shawn B. Vassell

Abstract:

This paper assesses the design and feasibility of a Materials Recovery Facility (MRF) in Jamaica as a possible green-industry approach to the nation’s economic and solid waste management problems. Jamaica is a developing nation vulnerable to climate change, which can affect the blue economy and tourism on which it heavily relies. Jamaica’s National Solid Waste Management Authority (NSWMA) collects only a fraction of the solid waste produced annually and transports it to dumpsites; the remainder is either burnt by the population or disposed of illegally. These practices harm the environment and threaten the sustainability of economic growth from the blue economy and tourism, while the waste management system remains predominantly a cost centre. Implementing an MRF could boost the manufacturing sector, contribute to economic growth, and be a catalyst for creating a green industry with multiple downstream value chains and supply chain linkages. Globally, the trend toward reuse and recycling has created an international market for recycled solid waste, and MRFs enable the efficient sorting of solid waste into desired recoverable materials, providing a gateway to the international trade of recycled waste. The paper reviews the current state of, and efforts to improve, waste management in Jamaica in contrast with similar and more advanced territories, and explores the concept of green industrialization and its applicability to vulnerable small-state economies like Jamaica. It highlights the possible contributions and benefits of an MRF as a seeding factory that can anchor the reverse and forward logistics of other green industries as part of a logistics-centred economy. Further, the study presents an engineering economic analysis that assesses the viability of implementing an MRF in Jamaica, outlining the potential cost of constructing and operating the facility and providing a realistic cash flow estimate to establish a baseline for profitability. The approach combines quantitative and qualitative data, stated assumptions, and modelling using industrial engineering tools and techniques, applying facility planning, systems analysis, and operations research with a focus on linear programming. Approaches to overcoming implementation challenges, including policy, technology, and public education, are detailed. The results present a reasonable judgment of the prospects of incorporating an MRF to improve Jamaica’s solid waste management, contribute socioeconomic and environmental benefits, and provide an alternative pathway to economic sustainability.
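
An illustrative sketch only: a baseline discounted cash flow for an MRF and a tiny linear program allocating sorting capacity across recovered streams, in the spirit of the engineering-economic and linear programming analysis described above. All prices, tonnages, costs, and the discount rate are placeholder numbers, not figures from the study.

```python
# Baseline NPV plus a small LP for allocating sorting hours across materials.
import numpy as np
from scipy.optimize import linprog

# --- Engineering-economic baseline: net present value of the facility -------
capex = 12_000_000                       # assumed construction cost (USD)
annual_net_cash = np.full(15, 1_800_000) # assumed yearly operating surplus (USD)
r = 0.10                                 # assumed discount rate
npv = -capex + sum(cf / (1 + r) ** (t + 1) for t, cf in enumerate(annual_net_cash))
print(f"baseline NPV: {npv:,.0f} USD")

# --- Linear programming: allocate sorting hours across material streams -----
# maximize revenue = 55*PET + 40*cardboard + 180*aluminium (USD/tonne, assumed)
c = [-55.0, -40.0, -180.0]               # negate because linprog minimizes
A_ub = [[1.2, 0.8, 2.5]]                 # sorting hours required per tonne (assumed)
b_ub = [8_000]                           # available sorting hours per year (assumed)
bounds = [(0, 3_000), (0, 5_000), (0, 600)]  # market-limited tonnages (assumed)
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print("tonnes of PET, cardboard, aluminium to recover:", res.x.round(1))
```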

Keywords: engineering-economic analysis, facility design, green industry, MRF, manufacturing, plant layout, solid-waste management, sustainability, waste disposal

Procedia PDF Downloads 190
226 Behavioral Analysis of Anomalies in Intertemporal Choices Through the Concept of Impatience and Customized Strategies for Four Behavioral Investor Profiles With an Application of the Analytic Hierarchy Process: A Case Study

Authors: Roberta Martino, Viviana Ventre

Abstract:

The Discounted Utility Model is the essential reference for calculating the utility of intertemporal prospects. According to this model, the value assigned to an outcome decreases as the delay between the moment the choice is made and the instant the outcome is experienced increases. This diminution determines the intertemporal preferences of the individual, whose psychological significance is encapsulated in the discount rate. The classical model assumes a discount rate of linear or exponential form, which is necessary for temporally consistent preferences. Empirical evidence, however, has shown that individuals apply discount rates of a hyperbolic nature, generating the phenomenon of intertemporal inconsistency: individuals have difficulty managing their money and planning for the future. Behavioral finance, which analyzes investor attitudes through cognitive psychology, has shown that beyond individual financial competence there are factors that condition choices by altering the decision-making process: behavioral biases. Since such cognitive biases are inevitable, research aimed at improving the quality of choices has focused on a personalized approach that combines behavioral finance with personality theory. From these considerations emerges the need for a procedure to construct personalized strategies that take into account the client’s personal characteristics, such as age or gender, as well as personality. The work is developed in three parts. The first part investigates the weight of the degree of impatience and of the decrease in impatience in the anomalies of the Discounted Utility Model; specifically, the degree of decrease in impatience quantifies the impact that emotional factors generated by haste and financial market agitation have on decision making. The second part considers the relationship between decision making and personality theory; specifically, four behavioral categories are associated with four categories of behavioral investors, which allows intertemporal choice to be interpreted as a combination of bias and temperament. The third part presents a method for constructing personalized strategies using the Analytic Hierarchy Process: the first level of the hierarchy represents the goal of the strategic plan; the second level contains the four temperaments; the third level compares the temperaments with the anomalies of the Discounted Utility Model; and the fourth level contains the possible alternatives to be selected. The weights between level 2 and level 3 are constructed from the degrees of decrease in impatience derived for each temperament in an experimental phase. The results confirm the relationship between temperaments and anomalies through the degree of decrease in impatience and highlight the actual impact of emotions on decision making. Moreover, the paper proposes an original and useful way to improve financial advice; including additional levels in the Analytic Hierarchy Process can further improve strategic personalization.
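
A minimal sketch of the Analytic Hierarchy Process weighting step follows: deriving a priority vector for the four temperaments from a pairwise comparison matrix via the principal eigenvector, plus Saaty's consistency ratio. The matrix entries are illustrative judgments, not the experimentally derived weights of the study.

```python
# AHP priority vector and consistency check for a 4x4 pairwise comparison matrix.
import numpy as np

# Pairwise comparisons of the four behavioral temperaments with respect to the
# goal (level 2 of the hierarchy); A[i, j] = how strongly i is preferred over j.
A = np.array([[1.0, 3.0, 5.0, 1.0],
              [1/3, 1.0, 3.0, 1/3],
              [1/5, 1/3, 1.0, 1/5],
              [1.0, 3.0, 5.0, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)                       # principal eigenvalue
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                          # normalized priority vector
print("temperament weights:", weights.round(3))

# Consistency check: CR = CI / RI, with Saaty's random index RI = 0.90 for n = 4.
n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)
cr = ci / 0.90
print(f"consistency ratio: {cr:.3f} (values below ~0.10 are usually acceptable)")
```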

Keywords: analytic hierarchy process, behavioral finance anomalies, intertemporal choice, personalized strategies

Procedia PDF Downloads 71
225 A Methodology Based on Image Processing and Deep Learning for Automatic Characterization of Graphene Oxide

Authors: Rafael do Amaral Teodoro, Leandro Augusto da Silva

Abstract:

Originating from graphite, graphene is a two-dimensional (2D) material that promises to revolutionize technology in many areas, such as energy, telecommunications, civil construction, aviation, textiles, and medicine. This is possible because its structure, formed by carbon bonds, provides desirable optical, thermal, and mechanical characteristics of interest to multiple market sectors. Several research and development centers are therefore studying manufacturing methods and applications of graphene, efforts that are often hampered by the scarcity of agile and accurate methodologies to characterize the material, that is, to determine its composition, shape, size, and the number of layers and crystals. To address this need, this study proposes a computational methodology that applies deep learning to identify graphene oxide crystals in order to characterize samples by crystal size. A fully convolutional neural network, U-net, was trained to segment SEM images of graphene oxide. The segmentation generated by the U-net is refined with a per-class standard deviation technique, which allows crystals to be distinguished with different labels through an object delimitation algorithm. Next, the position, area, perimeter, and lateral measures of each detected crystal are extracted from the images, generating a database of the dimensions of the crystals that compose the samples. Finally, graphs are automatically created showing the frequency distributions of crystal area and perimeter. The methodology achieved a high capacity for segmenting graphene oxide crystals, with accuracy and F-score of 95% and 94%, respectively, on the test set. Such performance demonstrates strong generalization in crystal segmentation, since it holds under significant changes in image extraction quality. The measurement of non-overlapping crystals showed an average error of 6% across the measurement metrics, suggesting that the model provides high-quality measurements for non-overlapping segmentations. For overlapping crystals, however, a limitation of the model was identified; to overcome it, the samples to be analyzed must be properly prepared, which minimizes crystal overlap during SEM image acquisition and guarantees lower measurement error without extra data-handling effort. All in all, the method is a significant time saver, capable of measuring hundreds of graphene oxide crystals in seconds and saving weeks of manual work.
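
The sketch below covers only the post-segmentation measurement step (the trained U-net is assumed and replaced by a toy binary mask): labeling detected crystals and extracting position, area, and perimeter with scikit-image, then building the size-distribution histogram. The pixel size and the mask are placeholders.

```python
# Per-crystal measurements from a segmentation mask using scikit-image.
import numpy as np
from skimage.measure import label, regionprops

mask = np.zeros((64, 64), dtype=np.uint8)   # stand-in for the U-net output
mask[5:20, 5:25] = 1                        # "crystal" 1
mask[35:55, 30:60] = 1                      # "crystal" 2

pixel_size_nm = 2.0                         # assumed nm per pixel from SEM metadata

labeled = label(mask)                       # delimit non-overlapping objects
rows = []
for region in regionprops(labeled):
    rows.append({
        "centroid_px": tuple(round(c, 1) for c in region.centroid),
        "area_nm2": region.area * pixel_size_nm ** 2,
        "perimeter_nm": region.perimeter * pixel_size_nm,
    })
print(rows)

areas = np.array([r["area_nm2"] for r in rows])
hist, edges = np.histogram(areas, bins=5)   # frequency distribution by area size
print("area histogram:", hist, edges.round(0))
```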

Keywords: characterization, graphene oxide, nanomaterials, U-net, deep learning

Procedia PDF Downloads 135
224 Design of Smart Catheter for Vascular Applications Using Optical Fiber Sensor

Authors: Lamiek Abraham, Xinli Du, Yohan Noh, Polin Hsu, Tingting Wu, Tom Logan, Ifan Yen

Abstract:

In the field of minimally invasive surgery, smart medical instruments such as catheters and guidewires are typically operated from a remote distance to gain access to the diseased artery, often negotiating tortuous, complex, and diseased vessels in the process. Three optical fiber sensors, each 1.5 mm in diameter and spaced 120° apart, are proposed to be mounted in a catheter-based pump device with a diameter of 10 mm. These sensors are configured to address the challenges surgeons face during insertion through curved major vessels such as the aortic arch, providing information on wall contact and shape sensing. This study presents experimental and mathematical models of the optical fiber sensors with two degrees of freedom. Two eight-gear-shaped tubes made of 3D-printed thermoplastic polyurethane (TPU) are connected: the optical fiber sensors are mounted inside the first tube, which shields them from external light and serves as the catheter prototype, while the second tube acts as a flat reflector for the light-intensity-modulation-based optical fiber sensors. The first tube is attached to a linear guide for insertion and withdrawal and can be rotated manually by 45° using the tube gear. A rigid 3D phantom mimicking the anatomy of the aortic arch was developed, in which the tests were carried out. During insertion of the sensors into the 3D phantom, datasets of voltage, distance, and sensor position were obtained; these datasets reflect the light-intensity modulation characteristics of the sensors with respect to a plane projection of the aortic arch shape. Mathematical modeling of the light intensity was carried out based on the projection plane and the experimental set-up. The performance of the system was evaluated in terms of its accuracy in navigating the curvature and reporting sensor position by investigating 40 individual insertions of the sensors into the 3D phantom. The experiments demonstrated that the sensors were effectively steered through the phantom curvature to the desired target references in both degrees of freedom. The sensor behavior follows the theory of light reflectance: the smaller the radius of curvature, the more LED light is reflected and received by the photodiode. The mathematical model results are in good agreement with the experimental results and with the operating principle of light-intensity-modulated optical fiber sensors. A catheter prototype in TPU with three embedded optical fiber sensors has been developed that can navigate different radii of curvature with two degrees of freedom. The proposed system supports operators with pre-scan data, making maneuvering and bending through curved major vessels easier, more accurate, and safer.
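
A hedged sketch of the light-intensity-modulation idea: reflected intensity (photodiode voltage) falls as the fiber tip moves away from the reflector, so distance can be recovered by inverting a calibration curve. The inverse-square functional form and the synthetic calibration data are assumptions for illustration, not the paper's mathematical model.

```python
# Calibrate an assumed intensity-vs-distance model, then invert it for sensing.
import numpy as np
from scipy.optimize import curve_fit

def reflected_voltage(d, a, b, c):
    """Assumed calibration model: V(d) = a / (d + b)^2 + c."""
    return a / (d + b) ** 2 + c

# Synthetic calibration sweep (distance in mm, photodiode voltage in volts)
d_cal = np.linspace(0.5, 6.0, 20)
v_cal = reflected_voltage(d_cal, a=4.0, b=0.8, c=0.05)
v_cal += np.random.default_rng(1).normal(0, 0.01, d_cal.size)   # sensor noise

params, _ = curve_fit(reflected_voltage, d_cal, v_cal, p0=(1.0, 1.0, 0.0))

def distance_from_voltage(v, a, b, c):
    """Invert the fitted model to estimate tip-to-reflector distance from a reading."""
    return np.sqrt(a / (v - c)) - b

print("estimated distance for V = 0.5 V:",
      round(distance_from_voltage(0.5, *params), 2), "mm")
```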

Keywords: intensity-modulated optical fiber sensor, mathematical model, plane projection, shape sensing

Procedia PDF Downloads 209
223 Analytical and Numerical Modeling of Strongly Rotating Rarefied Gas Flows

Authors: S. Pradhan, V. Kumaran

Abstract:

Centrifugal gas separation processes effect separation by exploiting the difference in mole fraction in a high-speed rotating cylinder caused by the difference in molecular mass and, consequently, in centrifugal force density. They have been widely used for isotope separation, because chemical methods cannot separate isotopes of the same chemical species, and more recently centrifugal separation has also been explored for gases such as carbon dioxide and methane. The efficiency of separation depends critically on the secondary flow generated by temperature gradients at the cylinder wall or by inserts, so it is important to formulate accurate models for this secondary flow. The widely used Onsager model for the secondary flow is restricted to very long cylinders (length large compared to the diameter) in the limit of high stratification parameter, where the gas is confined to a thin layer near the cylinder wall, and it assumes no mass difference between the two species when calculating the secondary flow. The present analysis of rarefied gas flow in a rotating cylinder has two objectives. The first is to remove the restriction of high stratification parameter, generalizing the solutions to low rotation speeds where the stratification parameter may be O(1), and to treat dissimilar gases by accounting for the difference in molecular mass of the two species. The second is to compare the predictions with molecular simulations based on the direct simulation Monte Carlo (DSMC) method for rarefied gas flows, in order to quantify the errors arising from the approximations at different aspect ratios, Reynolds numbers, and stratification parameters. In this study, analytical and numerical solutions are obtained for the secondary flows generated at the curved cylinder surface and at the end caps by a linear wall temperature gradient and by external gas inflow/outflow at the axis of the cylinder; the effect of sources of mass, momentum, and energy within the flow domain is also analyzed. The analytical solutions are compared with DSMC simulations for three types of forcing: a wall temperature gradient, inflow/outflow of gas along the axis, and mass/momentum input due to inserts within the flow. The comparison reveals that the boundary conditions in the simulations and in the analysis have to be matched with care. The commonly used diffuse reflection boundary conditions at solid walls in DSMC simulations result in a non-zero slip velocity as well as a temperature slip (the gas temperature at the wall differs from the wall temperature); these have to be incorporated in the analysis in order to make quantitative predictions. In the case of mass/momentum/energy sources within the flow, it is necessary to ensure that the homogeneous boundary conditions are accurately satisfied in the simulations. When these precautions are taken, there is excellent agreement between analysis and simulations, to within 10%, even when the stratification parameter is as low as 0.707, the Reynolds number is as low as 100, the aspect ratio (length/diameter) of the cylinder is as low as 2, and the secondary flow velocity is as high as 0.2 times the maximum base flow velocity.
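
A small hedged sketch follows, computing the stratification parameter that controls how strongly the rotating gas is confined near the cylinder wall. The definition used here, A = ΩR·sqrt(m/(2·kB·T)) (peripheral speed over the most probable molecular speed), and the solid-body-rotation density profile are the common textbook forms and are assumed, as are all the numbers, purely for illustration.

```python
# Stratification parameter and equilibrium density profile in a rotating cylinder.
import numpy as np

kB = 1.380649e-23                      # Boltzmann constant (J/K)
m = 352 * 1.66054e-27                  # placeholder molecular mass, roughly UF6 (kg)
T = 300.0                              # placeholder gas temperature (K)
R = 0.1                                # placeholder cylinder radius (m)
Omega = 2 * np.pi * 1000.0             # placeholder rotation rate (rad/s)

A = Omega * R * np.sqrt(m / (2 * kB * T))
print(f"stratification parameter A = {A:.2f}")

# Solid-body-rotation density relative to the wall, n(r)/n(R) = exp(A^2 (r^2/R^2 - 1)):
# large A confines the gas to a thin layer near the wall, the Onsager-model limit.
r = np.linspace(0, R, 5)
print((np.exp(A**2 * ((r / R) ** 2 - 1))).round(6))
```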

Keywords: rotating flows, generalized Onsager and Carrier-Maslen model, DSMC simulations, rarefied gas flow

Procedia PDF Downloads 372
222 In vitro Antioxidant Activity and Total Phenolic Content of Dillenia indica and Garcinia penducalata, Commonly Used Fruits in Assamese Cuisine

Authors: M. Das, B. P. Sarma, G. Ahmed

Abstract:

The human diet can be a major source of antioxidants. Polyphenols, organic compounds present in the regular human diet, have good antioxidant properties. Many diseases are detected too late, after irreversible damage to the body has occurred; foods that are natural sources of antioxidants can therefore help prevent free radicals from damaging body tissues. Dillenia indica and Garcinia penducalata are two fruits readily available in Assam, a north-eastern Indian state. In the present study, the in vitro antioxidant properties of the fruits of these plants are compared, as decoctions of these fruits form a major part of Assamese cuisine. The DPPH free radical scavenging activity of the methanol, petroleum ether, and water extracts of G. penducalata and D. indica fruits was determined by the method of Cotelle et al. (1996): extract concentrations ranging from 10-110 ug/ml were added to 100 uM DPPH (2,2-diphenyl-1-picrylhydrazyl), and the absorbance was read at 517 nm after incubation, with ascorbic acid as the standard. For nitric oxide scavenging, different concentrations of the methanol, petroleum ether, and water extracts were mixed with sodium nitroprusside and incubated; Griess reagent was added and the optical density read at 546 nm following the method of Marcocci et al. (1994), again with ascorbic acid as the standard. Hydroxyl radical scavenging was assayed following the method of Kunchandy & Ohkawa (1990), and the superoxide scavenging activity of the extracts was determined by the method of Robak & Gryglewski (1998). Six replicates were maintained in each experiment and the SEM was evaluated, from which non-linear regression (curve fitting) with exponential growth models was used to derive the IC50 values of the SAWE and standard compounds; all statistical analyses were performed using the paired t-test. The hydroxyl radical scavenging activity of the various extracts of D. indica exhibited IC50 values below 110 ug/ml, whereas the scavenging activity of the G. penducalata extracts was, surprisingly, above 110 ug/ml. Similarly, the oxygen free radical scavenging activity of the D. indica extracts showed IC50 values below 110 ug/ml, with the methanolic extract exhibiting better free radical scavenging activity than vitamin C. The DPPH scavenging activities of the various extracts of both fruits were below 110 ug/ml, and again the methanolic extract of D. indica exhibited an IC50 value better than that of vitamin C. The higher phenolic content of the methanolic extract of D. indica is likely one of the major reasons for its enhanced in vitro antioxidant activity. The present study concludes that Dillenia indica and Garcinia penducalata both possess antioxidant activity, and that the activity of Dillenia indica is superior to that of Garcinia penducalata owing to its higher phenolic content.
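
An illustrative sketch of the IC50 estimation step: fit a dose-response curve to percent radical-scavenging data and solve for the concentration giving 50% inhibition. The saturating-exponential model and the data points below are placeholders, not the study's measurements.

```python
# Fit a dose-response curve and solve for IC50.
import numpy as np
from scipy.optimize import curve_fit

conc = np.array([10, 30, 50, 70, 90, 110], dtype=float)        # ug/ml
inhibition = np.array([18, 41, 58, 69, 77, 83], dtype=float)   # % scavenging (assumed)

def dose_response(c, i_max, k):
    """Assumed saturating model: I(c) = i_max * (1 - exp(-k * c))."""
    return i_max * (1.0 - np.exp(-k * c))

(i_max, k), _ = curve_fit(dose_response, conc, inhibition, p0=(90.0, 0.02))

# Solve i_max * (1 - exp(-k * c)) = 50 for the concentration c.
ic50 = -np.log(1.0 - 50.0 / i_max) / k
print(f"fitted i_max = {i_max:.1f}%, k = {k:.4f}; IC50 = {ic50:.1f} ug/ml")
```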

Keywords: antioxidants, free radicals, phenolic, scavenging

Procedia PDF Downloads 569
221 4D Monitoring of Subsurface Conditions in Concrete Infrastructure Prior to Failure Using Ground Penetrating Radar

Authors: Lee Tasker, Ali Karrech, Jeffrey Shragge, Matthew Josh

Abstract:

Monitoring the deterioration of concrete infrastructure is an important assessment task for engineers, yet detecting deterioration within a structure can be difficult: even when a failure crack, or fluid seepage through such a crack, is observed at the surface, the source location of the deterioration is often unknown. Geophysical methods assist engineers with assessing the subsurface condition of materials; techniques such as Ground Penetrating Radar (GPR) provide information on the location of buried infrastructure such as pipes and conduits, the positions of reinforcement within concrete, and regions of voids or cavities behind tunnel lining. This experiment demonstrates the use of GPR as an infrastructure-monitoring tool to highlight and monitor regions of possible deterioration within a concrete test wall due to increasing fracture generation, in particular over a period of applied load up to and including structural failure. A three-point load was applied to a concrete test wall of dimensions 1700 x 600 x 300 mm³ in increments of 10 kN until the wall failed structurally at 107.6 kN. At each load increment, the load was held constant and the wall was scanned with GPR along profile lines across its surface. The measured radar amplitude responses of the GPR profiles at each load interval were reconstructed into depth-slice grids presented at fixed depth-slice intervals, and corresponding depth slices were subtracted between datasets to compare the radar amplitude response and monitor for changes. At lower applied loads (0-60 kN), few changes were observed in the amplitude differences between datasets. At higher applied loads (around 100 kN), closer to structural failure, much larger differences appeared, with up to a 300% increase in radar amplitude response at some locations between the 0 kN and 100 kN datasets. Distinct regions were observed in the 100 kN difference dataset (100 kN minus 0 kN) close to the location of the final failure crack: a conical feature at approximately 3.0-12.0 cm depth from the surface and a vertical linear feature at approximately 12.1-21.0 cm depth from the surface. These regions are interpreted as locations of increased change in pore space due to mechanical loading, of increased micro-crack volume, or of a developing macro-crack. The experiment showed that GPR is a useful geophysical monitoring tool for highlighting and monitoring regions of large change in radar amplitude response that may be associated with significant internal structural change (e.g., crack development). GPR is a non-destructive technique that is fast to deploy in a production setting and can help reduce risk and cost in future infrastructure maintenance programs by highlighting and monitoring locations within a structure that exhibit large changes in radar amplitude over calendar time.
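
A minimal sketch of the 4D differencing step described above: subtract the 0 kN baseline depth-slice grid from a later load step and flag cells whose radar amplitude response changed strongly. The synthetic grids, the emulated anomaly, and the flagging threshold are illustrative, not the survey data.

```python
# Depth-slice differencing between load steps and anomaly flagging.
import numpy as np

rng = np.random.default_rng(2)
amp_0kN = rng.uniform(0.8, 1.2, size=(40, 120))     # baseline depth slice
amp_100kN = amp_0kN.copy()
amp_100kN[15:25, 60:70] *= 3.5                      # emulate a developing crack zone

diff = amp_100kN - amp_0kN
percent_change = 100.0 * diff / amp_0kN

flagged = percent_change > 200.0                    # assumed anomaly threshold (%)
rows, cols = np.nonzero(flagged)
print(f"{flagged.sum()} cells exceed the threshold, "
      f"rows {rows.min()}-{rows.max()}, cols {cols.min()}-{cols.max()}")
```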

Keywords: 4D GPR, engineering geophysics, ground penetrating radar, infrastructure monitoring

Procedia PDF Downloads 145
220 Culvert Blockage Evaluation Using Australian Rainfall And Runoff 2019

Authors: Rob Leslie, Taher Karimian

Abstract:

The blockage of cross-drainage structures is a risk that needs to be understood and managed, or reduced through design. Blockage is a random event, influenced by site-specific factors, that must be quantified for design: under- or overestimating it can have major impacts on flood risk and on the cost of drainage structures, and the issue is heightened for projects located on sensitive lands. It is a particularly complex problem for large linear infrastructure projects (e.g., rail corridors) located on floodplains, where blockage factors can influence flooding both upstream and downstream of the infrastructure. The selection of appropriate blockage factors for hydraulic modelling has been the subject of extensive research by hydraulic engineers. This paper reviews the current Australian Rainfall and Runoff 2019 (ARR 2019) methodology for blockage assessment by applying it to a brownfield transport corridor upgrade case study in New South Wales; the results are also validated against asset data and maintenance records. ARR 2019 Book 6, Chapter 6 provides advice and an approach for estimating the blockage of bridges and culverts; this paper concentrates specifically on cross-drainage structures. The method estimates the blockage level for culverts affected by sediment or debris during flooding, with the objective of evaluating a numerical blockage factor that can be used in the hydraulic assessment of cross-drainage structures. The project included an assessment of over 200 cross-drainage structures. To estimate a blockage factor for use in the hydraulic model, a process was developed that considers the qualitative factors (e.g., debris type and debris availability) and the site-specific hydraulic factors that influence blockage. A site rating of the debris potential (i.e., availability, transportability, mobility) at each crossing was completed using the method outlined in the ARR 2019 guidelines. The hydraulic inputs (i.e., flow velocity and flow depth) and the qualitative factors at each crossing were then combined in a spreadsheet in which the design blockage level for each cross-drainage structure was determined from the condition relating the inlet clear width, L10 (the average length of the longest 10% of the debris reaching the site), and the adjusted debris potential. Asset data, including site photos and maintenance records, were then reviewed and compared with the blockage assessment to check the validity of the results. The comparison demonstrates that the blockage factors estimated at each crossing using the ARR 2019 guidelines are well validated by the asset data. The primary finding of the study is that the ARR 2019 methodology is a suitable approach for culvert blockage assessment, validated here against a case study spanning a large geographical area and multiple sub-catchments. The study also found that the methodology can be effectively coded within a spreadsheet or similar analytical tool to automate its application.
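
A schematic sketch of automating the assessment in code rather than a spreadsheet is given below. The qualitative scoring scheme and the blockage fractions are placeholder values intended only to show the structure of the calculation; the actual ratings and design blockage levels must be taken from the ARR 2019 Book 6, Chapter 6 tables.

```python
# Placeholder logic relating debris potential, inlet clear width and L10 to a
# design blockage fraction (structure of the ARR 2019 workflow, not its tables).
def adjusted_debris_potential(availability, transportability, mobility,
                              velocity_ms, depth_m):
    """Combine qualitative site ratings (1 = low, 3 = high) with hydraulic
    factors into a single debris-potential score (illustrative scheme only)."""
    site_score = availability + transportability + mobility          # 3..9
    hydraulic_bonus = 1 if (velocity_ms > 2.0 or depth_m > 1.0) else 0
    return site_score + hydraulic_bonus

def design_blockage_fraction(inlet_clear_width_m, L10_m, debris_potential):
    """Placeholder decision rule relating inlet clear width, L10 and the
    adjusted debris potential to a design blockage fraction."""
    if inlet_clear_width_m < L10_m:          # opening smaller than typical debris
        return 1.0 if debris_potential >= 7 else 0.5
    return 0.25 if debris_potential >= 7 else 0.0

dp = adjusted_debris_potential(availability=3, transportability=2, mobility=2,
                               velocity_ms=2.4, depth_m=0.8)
print("adjusted debris potential:", dp)
print("design blockage fraction:", design_blockage_fraction(1.2, 1.8, dp))
```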

Keywords: ARR 2019, blockage, culverts, methodology

Procedia PDF Downloads 306
219 Development of an EEG-Based Real-Time Emotion Recognition System on Edge AI

Authors: James Rigor Camacho, Wansu Lim

Abstract:

Over the last few years, the development of new wearable and processing technologies has accelerated in order to harness physiological data such as electroencephalograms (EEGs) for EEG-based applications. Among physiological signals, EEG has been demonstrated to yield the highest classification accuracy for emotion recognition. However, when emotion recognition systems are used for real-time classification, the training unit is frequently left to run offline or in the cloud rather than locally on the edge; this strategy has hampered research, and the full potential of edge AI devices has yet to be realized. Edge AI devices are high-performance embedded computers that can collect, process, and store data on their own and run complex algorithms such as localization, detection, and recognition in real-time applications. The NVIDIA Jetson series, specifically the Jetson Nano, was used in this implementation, and the cEEGrid, integrated with the open-source brain-computer interface platform OpenBCI, was used to collect the EEG signals. An EEG-based real-time emotion recognition system on edge AI is proposed in this paper. Machine learning classifiers were used to perform graphical spectrogram categorization of the EEG signals and to predict emotional states from the input data properties. In the signal-processing stage, each EEG signal received in real time is translated from the time domain to the frequency domain using the Fast Fourier Transform (FFT) so that the frequency bands can be observed; the power density, standard deviation, and mean are calculated for each band to represent its variance. The selected features are then used to predict emotion with the K-Nearest Neighbors (KNN) technique, a supervised learning method, with arousal and valence datasets used to train the classifier parameters. Because classification, recognition, and emotion prediction are conducted online and locally on the edge, the KNN technique increased the performance of the emotion recognition system on the NVIDIA Jetson Nano. This implementation aims to bridge the research gap on cost-effective and efficient real-time emotion recognition using a resource-constrained hardware device such as the Jetson Nano, so that EEG-based emotion identification at the edge can be adopted more rapidly in research and industry applications.
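
A minimal sketch of the processing chain described above: FFT-based band-power features from a short EEG epoch, then a K-Nearest Neighbors classifier for arousal/valence labels. The synthetic signals, band edges, sampling rate, and labels are placeholders, not the cEEGrid recordings of the study.

```python
# FFT band-power features + KNN classification of EEG epochs.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

FS = 250                       # assumed sampling rate (Hz)
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_power_features(epoch):
    """Mean power per frequency band plus overall mean and std of the epoch."""
    freqs = np.fft.rfftfreq(epoch.size, d=1.0 / FS)
    psd = np.abs(np.fft.rfft(epoch)) ** 2
    powers = [psd[(freqs >= lo) & (freqs < hi)].mean() for lo, hi in BANDS.values()]
    return np.array(powers + [epoch.mean(), epoch.std()])

rng = np.random.default_rng(3)
epochs = rng.normal(size=(40, FS * 2))              # forty 2-second epochs (synthetic)
labels = rng.integers(0, 2, size=40)                # 0 = low arousal, 1 = high (assumed)

X = np.array([band_power_features(e) for e in epochs])
knn = KNeighborsClassifier(n_neighbors=5).fit(X[:30], labels[:30])
print("predicted states:", knn.predict(X[30:]))
```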

Keywords: edge AI device, EEG, emotion recognition system, supervised learning algorithm, sensors

Procedia PDF Downloads 80
218 Modeling of Foundation-Soil Interaction Problem by Using Reduced Soil Shear Modulus

Authors: Yesim Tumsek, Erkan Celebi

Abstract:

In order to simulate the infinite soil medium in a soil-foundation interaction problem, the essential geotechnical parameter on which the foundation stiffness depends is the soil shear modulus. This parameter directly affects the site and structural response of the considered model under earthquake ground motions. Because the shear modulus is strain-dependent under cyclic loading, it is difficult to estimate an accurate value for computing the foundation stiffness in a successful dynamic soil-structure interaction analysis. The aim of this study is to discuss in detail how to select an appropriate soil shear modulus for computational analyses and to evaluate the effect of the strain-dependent variation in shear modulus on the impedance functions used in the sub-structure method for idealizing the soil-foundation interaction problem. The impedance functions consist of springs and dashpots that represent the frequency-dependent stiffness and damping characteristics at the soil-foundation interface. Earthquake-induced vibration energy is dissipated into the soil by both radiation and hysteretic damping; therefore, flexible-base system damping, as well as the variability in shear strength, should be considered when calculating the impedance functions to achieve a more realistic dynamic soil-foundation interaction model. A MATLAB code was written for this purpose. The case-study example chosen for the analysis is a 4-story reinforced concrete building located in Istanbul, consisting of shear walls and moment-resisting frames with a total height of 12 m from the basement level. The foundation system consists of two strip footings of different sizes on clayey soils of different plasticity (PI = 13 and 16). In the first stage of the study, the shear modulus reduction factor was not considered in the MATLAB algorithm: the static stiffness, dynamic stiffness modifiers, and embedment correction factors of two rigid rectangular foundations, 2 m wide by 17 m long below the moment frames and 7 m wide by 17 m long below the shear walls, were obtained for the translational and rocking vibration modes. Afterwards, the dynamic impedance functions were calculated for the reduced shear modulus using the developed MATLAB code, with the embedment effect of the foundation also considered. The results show that the strain induced in the soil depends on the level of earthquake demand, and that as the strain range increases, the dynamic stiffness of the foundation medium decreases dramatically. The overall response of the structure can be affected considerably by the degradation in soil stiffness, even for a moderate earthquake; it is therefore very important to use a corrected dynamic shear modulus in earthquake analyses that include soil-structure interaction.
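
The sketch below shows, in hedged form, how a strain-dependent shear modulus propagates into the static foundation stiffness that feeds the impedance functions. The hyperbolic G/Gmax curve and the Gazetas-type vertical-stiffness expression are common textbook forms adopted here for illustration; they are not necessarily the exact relations coded by the authors in MATLAB, and all soil parameters are assumed values.

```python
# Strain-dependent shear modulus reduction and its effect on footing stiffness.
import numpy as np

def g_over_gmax(shear_strain, gamma_ref=1e-3):
    """Hyperbolic (Hardin-Drnevich-type) modulus reduction, assumed gamma_ref."""
    return 1.0 / (1.0 + shear_strain / gamma_ref)

def vertical_static_stiffness(G, nu, half_length, half_width):
    """Gazetas-type vertical stiffness of a rigid rectangular footing on a
    homogeneous half-space: Kz = (2 G L / (1 - nu)) * (0.73 + 1.54 * chi**0.75)."""
    Ab = 4.0 * half_length * half_width
    chi = Ab / (4.0 * half_length ** 2)
    return 2.0 * G * half_length / (1.0 - nu) * (0.73 + 1.54 * chi ** 0.75)

G_max = 60e6        # assumed small-strain shear modulus of the clay (Pa)
nu = 0.4            # assumed Poisson's ratio
L, B = 8.5, 3.5     # half-dimensions of the 7 m x 17 m footing (m)

for strain in (1e-5, 1e-4, 1e-3):
    G = G_max * g_over_gmax(strain)
    Kz = vertical_static_stiffness(G, nu, L, B)
    print(f"strain {strain:.0e}: G/Gmax = {g_over_gmax(strain):.2f}, Kz = {Kz/1e9:.2f} GN/m")
```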

Keywords: clay soil, impedance functions, soil-foundation interaction, sub-structure approach, reduced shear modulus

Procedia PDF Downloads 246
217 Solid Particles Transport and Deposition Prediction in a Turbulent Impinging Jet Using the Lattice Boltzmann Method and a Probabilistic Model on GPU

Authors: Ali Abdul Kadhim, Fue Lien

Abstract:

Solid particle distribution on an impingement surface has been simulated utilizing a graphics processing unit (GPU). An in-house computational fluid dynamics (CFD) code has been developed to investigate a 3D turbulent impinging jet using the lattice Boltzmann method (LBM) in conjunction with large eddy simulation (LES) and the multiple relaxation time (MRT) model. This paper proposes an improvement to the LBM-cellular automata (LBM-CA) probabilistic method. In the current model, the fluid flow utilizes the D3Q19 lattice, while the particle model employs the D3Q27 lattice. The particle numbers are defined at the same regular LBM nodes, and the transport of particles from one node to its neighboring nodes is determined in accordance with the particle bulk density and velocity, taking all external forces into account. Previous models distribute particles at each time step without considering the local velocity and the number of particles at each node; the present model overcomes these deficiencies and can therefore better capture the dynamic interaction between particles and the surrounding turbulent flow field. Despite the increasing popularity of the LBM-MRT-CA model for simulating complex multiphase flows, the approach is still expensive in terms of the memory and computational time required for 3D simulations. To improve throughput, a single GeForce GTX TITAN X GPU is used in the present work, with the CUDA parallel programming platform and the cuRAND library employed to form an efficient LBM-CA algorithm. The methodology was first validated against a benchmark test case involving particle deposition on a square cylinder confined in a duct; the flow was unsteady and laminar at Re=200 (Re is the Reynolds number), and simulations were conducted for different Stokes numbers. The present LBM solutions agree well with results available in the open literature. The GPU code was then used to simulate particle transport and deposition in a turbulent impinging jet at Re=10,000, for L/D=2, 4, and 6, where L is the nozzle-to-surface distance and D is the jet diameter, and the effect of the Stokes number on the particle deposition profile was studied at each L/D ratio. For comparison, another in-house serial CPU code was developed, coupling LBM with the classical Lagrangian particle dispersion model. Agreement between the results obtained with the LBM-CA and LBM-Lagrangian models and the experimental data is generally good. The present GPU approach achieves a speedup of about 350 over the serial code running on a single CPU.
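
A schematic CPU sketch (NumPy, not CUDA) of the cellular-automata particle step follows: particles stored at a lattice node are redistributed to neighboring nodes with probabilities weighted by the local fluid velocity, which is the idea behind the LBM-CA coupling. A 2D, D2Q9-style neighborhood and the mobility rule are simplifications assumed here for brevity; the paper's model works on the D3Q27 lattice with the full force balance.

```python
# Probabilistic (CA-style) redistribution of a node's particles to its neighbors.
import numpy as np

rng = np.random.default_rng(4)
DIRS = np.array([[0, 0], [1, 0], [-1, 0], [0, 1], [0, -1],
                 [1, 1], [1, -1], [-1, 1], [-1, -1]])   # rest + 8 neighbors

def move_particles(n_particles, velocity, u_lattice_max=0.3):
    """Distribute the particle count at one node: a fraction |u|/u_max moves,
    split among neighbor directions in proportion to max(0, u . e_i); the rest
    stays put. Multinomial sampling makes the update probabilistic."""
    speed = np.linalg.norm(velocity)
    move_fraction = min(speed / u_lattice_max, 1.0)
    proj = np.maximum(DIRS[1:] @ velocity, 0.0)
    p_move = move_fraction * proj / proj.sum() if proj.sum() > 0 else np.zeros(8)
    p = np.concatenate(([1.0 - p_move.sum()], p_move))
    return rng.multinomial(n_particles, p)              # particles sent per direction

counts = move_particles(n_particles=1000, velocity=np.array([0.15, 0.05]))
for d, c in zip(DIRS, counts):
    if c:
        print(f"direction {tuple(d)}: {c} particles")
```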

Keywords: CUDA, GPU parallel programming, LES, lattice Boltzmann method, MRT, multi-phase flow, probabilistic model

Procedia PDF Downloads 176