10539 Evaluation of Water Management Options to Improve the Crop Yield and Water Productivity for Semi-Arid Watershed in Southern India Using AquaCrop Model
Authors: V. S. Manivasagam, R. Nagarajan
Abstract:
Modeling the interactions of soil, water, and crop growth is attaining major importance, considering future climate change and the water available for agriculture to meet growing food demand. Progress in understanding crop growth response during water stress through a crop modeling approach provides an opportunity for improving and sustaining future agricultural water use efficiency. An attempt has been made to evaluate the potential of the crop modeling approach for assessing the minimal supplementary irrigation requirement for crop growth under water-limited conditions and its practical significance for sustainable improvement of crop yield and water productivity. Among the numerous crop models, the water-driven AquaCrop model was chosen for the present study on account of its modeling approach and its treatment of water stress in yield simulation. The study was carried out in the rainfed maize area of the semi-arid Shanmuganadi watershed (a tributary of the Cauvery river system) in southern India during the rabi cropping season (October-February). In addition to simulating actual rainfed maize growth, irrigated maize scenarios were simulated to assess the supplementary irrigation requirement during water shortage for the period 2012-2015. The simulation results for rainfed maize showed an average yield of 0.5-2 t ha-1 during deficit monsoon seasons (<350 mm), whereas 5.3 t ha-1 was obtained during sufficient monsoon periods (>350 mm). Scenario results for irrigated maize during deficit monsoon periods revealed that 150-200 mm of supplementary irrigation ensured an irrigated maize yield of 5.8 t ha-1. Thus, the results clearly show that minimal application of supplementary irrigation during the critical growth period, together with the deficit rainfall, increased crop water productivity from 1.07 to 2.59 kg m-3 for the major soil types.
Overall, AquaCrop is found to be very effective for sustainable irrigation assessment, considering the model's simplicity and minimal input requirements.
Keywords: AquaCrop, crop modeling, rainfed maize, water stress
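The reported water productivity figures can be reproduced with a simple back-of-the-envelope calculation: water productivity in kg m-3 is yield (kg ha-1) divided by water supplied (m3 ha-1), where 1 mm of water depth over 1 ha equals 10 m3. The water totals below are assumed values chosen for illustration, not figures stated in the study.

```python
def water_productivity(yield_t_ha, water_mm):
    """Crop water productivity in kg per cubic metre.

    1 mm of water depth over 1 hectare = 10 m^3 of water.
    """
    yield_kg_ha = yield_t_ha * 1000.0
    water_m3_ha = water_mm * 10.0
    return yield_kg_ha / water_m3_ha

# Illustrative (assumed) seasonal water totals:
# rainfed maize, 2 t/ha on ~187 mm of effective water
print(round(water_productivity(2.0, 187), 2))
# irrigated scenario, 5.8 t/ha on ~224 mm (rain + supplementary irrigation)
print(round(water_productivity(5.8, 224), 2))
```

With these assumed water totals the function returns values close to the 1.07 and 2.59 kg m-3 range quoted in the abstract.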
Procedia PDF Downloads 264
10538 Societal Resilience Assessment in the Context of Critical Infrastructure Protection
Authors: Hannah Rosenqvist, Fanny Guay
Abstract:
Critical infrastructure protection has been an important topic for several years. Programmes such as the European Programme for Critical Infrastructure Protection (EPCIP), the Critical Infrastructure Warning Information Network (CIWIN) and the European Reference Network for Critical Infrastructure Protection (ERNCIP) have been the pillars of the work done since 2006. However, measuring critical infrastructure resilience has not been an easy task, because the concept of resilience has several definitions and is applied in different domains such as engineering and the social sciences. Since June 2015, the EU project IMPROVER has been developing a methodology that combines societal, organizational and technological resilience concepts in the hope of increasing critical infrastructure resilience. For this paper, we researched how to include societal resilience as a form of measurement in the context of critical infrastructure resilience. Because one of the main purposes of critical infrastructure (CI) is to deliver services to society, we believe societal resilience is an important factor to consider when assessing overall CI resilience. We found that existing methods for CI resilience assessment focus mainly on technical aspects, and that it was therefore necessary to develop a resilience model that takes social factors into account. The model developed within the IMPROVER project aims to include the community's expectations of infrastructure operators as well as information sharing with the public and planning processes. By considering such aspects, the IMPROVER framework not only helps operators increase the resilience of their infrastructures on the technical or organizational side but also aims to strengthen community resilience as a whole. This will further be achieved by taking interdependencies between critical infrastructures into consideration.
The knowledge gained during this project will enrich current European policies and practices for improved disaster risk management. The framework for societal resilience analysis is based on three dimensions of societal resilience: coping capacity, adaptive capacity and transformative capacity, capacities that have been recognized throughout a widespread literature review of the field. A set of indicators has been defined to describe a community's maturity within these resilience dimensions. Further, the indicators are categorized into six community assets that need to be accessible and utilized in such a way that they allow responding to changes and unforeseen circumstances. We conclude that the societal resilience model developed within the IMPROVER project can give critical infrastructure operators a good indication of the level of societal resilience.
Keywords: community resilience, critical infrastructure protection, critical infrastructure resilience, societal resilience
Procedia PDF Downloads 229
10537 Intelligent Transport System: Classification of Traffic Signs Using Deep Neural Networks in Real Time
Authors: Anukriti Kumar, Tanmay Singh, Dinesh Kumar Vishwakarma
Abstract:
Traffic control has been one of the most common and irritating problems since automobiles first hit the roads. Problems like traffic congestion impose a significant time burden around the world, and one significant remedy is the proper implementation of an Intelligent Transport System (ITS). ITS involves the integration of tools like smart sensors, artificial intelligence, positioning technologies and mobile data services to manage traffic flow, reduce congestion and enhance drivers' ability to avoid accidents during adverse weather. Road and traffic sign recognition is an emerging field of research in ITS. The classification problem for traffic signs needs to be solved, as it is a major step towards building semi-autonomous and autonomous driving systems. This work implements an approach to traffic sign classification by developing a Convolutional Neural Network (CNN) classifier on the GTSRB (German Traffic Sign Recognition Benchmark) dataset. Rather than using hand-crafted features, our model learns features directly from the data, avoiding an explosion of parameters, and employs data augmentation methods. Our model achieved an accuracy of around 97.6%, which is comparable to various state-of-the-art architectures.
Keywords: multiclass classification, convolutional neural network, OpenCV
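The convolution, activation, and pooling operations at the core of such a classifier can be sketched in NumPy. This is a minimal forward pass of a single stage with arbitrary layer sizes, not the architecture used in the study.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D cross-correlation of a single-channel image with one kernel."""
    ih, iw = image.shape
    kh, kw = kernel.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            out[r, c] = np.sum(image[r:r + kh, c:c + kw] * kernel)
    return out

def relu(x):
    return np.maximum(x, 0.0)

def max_pool(x, size=2):
    """Non-overlapping max pooling; trims edges that do not fit."""
    h, w = x.shape[0] // size * size, x.shape[1] // size * size
    x = x[:h, :w]
    return x.reshape(h // size, size, w // size, size).max(axis=(1, 3))

# A 32x32 grey-scale stand-in for a traffic-sign image,
# passed through one conv + ReLU + pool stage
rng = np.random.default_rng(0)
img = rng.random((32, 32))
kernel = rng.standard_normal((3, 3))
feature_map = max_pool(relu(conv2d(img, kernel)))
print(feature_map.shape)  # (15, 15)
```

A full classifier stacks several such stages and ends with a dense softmax layer; in practice a framework with GPU support and automatic differentiation would be used for training.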
Procedia PDF Downloads 174
10536 Strategic Asset Allocation Optimization: Enhancing Portfolio Performance Through PCA-Driven Multi-Objective Modeling
Authors: Ghita Benayad
Abstract:
Asset allocation, which affects the long-term profitability of portfolios by distributing assets across a range of investment objectives, is the cornerstone of investment management in the dynamic and complicated world of financial markets. This paper offers a technique for optimizing strategic asset allocation with the goal of improving portfolio performance, addressing the inherent complexity and uncertainty of the market through the use of Principal Component Analysis (PCA) in a multi-objective modeling framework. The first section of the study starts with a critical evaluation of conventional asset allocation techniques, highlighting how poorly they capture the intricate relationships between assets and the volatile nature of the market. To overcome these challenges, the project proposes a PCA-driven methodology that isolates the important characteristics influencing asset returns by reducing the dimensionality of the investment universe. This reduction provides a stronger basis for asset allocation decisions by facilitating a clearer understanding of market structures and behaviors. Building on this foundation, the project uses a multi-objective optimization model that considers several performance metrics at once, including risk minimization, return maximization, and the accomplishment of predetermined investment goals such as regulatory compliance or sustainability standards. This model provides a more comprehensive picture of investor preferences and portfolio performance than conventional single-objective optimization techniques. The PCA-driven multi-objective optimization model is then applied to historical market data, aiming to construct portfolios that perform better under different market conditions.
Compared to portfolios produced by conventional asset allocation methodologies, the results show that portfolios optimized using the proposed method display improved risk-adjusted returns, more resilience to market downturns, and better alignment with specified investment objectives. The study also examines the implications of this PCA technique for portfolio management, including the prospect that it might give investors a more advanced framework for navigating financial markets. The findings suggest that by combining PCA with multi-objective optimization, investors may obtain a more strategic and informed asset allocation that is responsive to both market conditions and individual investment preferences. In conclusion, this capstone project advances the field of financial engineering by creating a sophisticated asset allocation optimization model that integrates PCA with multi-objective optimization. In addition to raising questions about the current state of asset allocation, the proposed method of portfolio management opens up new avenues for research and application in the area of investment techniques.
Keywords: asset allocation, portfolio optimization, principal component analysis, multi-objective modelling, financial market
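The dimensionality-reduction step described above can be sketched as follows. This is a toy example on synthetic returns; the asset count, factor count, and data are assumptions for illustration, not the study's inputs.

```python
import numpy as np

rng = np.random.default_rng(42)
# Synthetic daily returns: 250 observations of 10 assets driven by 2 latent factors
factors = rng.standard_normal((250, 2))
loadings = rng.standard_normal((2, 10))
returns = factors @ loadings + 0.1 * rng.standard_normal((250, 10))

# PCA via eigendecomposition of the sample covariance matrix
cov = np.cov(returns, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]          # sort components by explained variance
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

explained = eigvals / eigvals.sum()
print(f"variance explained by first 2 components: {explained[:2].sum():.2f}")

# Project the 10-asset universe onto the 2 dominant components
reduced = returns @ eigvecs[:, :2]
print(reduced.shape)  # (250, 2)
```

The reduced representation would then feed a multi-objective optimizer that trades off return, risk, and any additional constraints.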
Procedia PDF Downloads 44
10535 The Comparison of Joint Simulation and Estimation Methods for the Geometallurgical Modeling
Authors: Farzaneh Khorram
Abstract:
This paper endeavors to construct a block model to assess grinding energy consumption (CCE) and to pinpoint blocks with the highest potential energy usage during the grinding process within a specified region. Leveraging geostatistical techniques, particularly joint estimation or simulation based on geometallurgical data from various mineral processing stages, our objective is to forecast CCE across the study area. The dataset encompasses variables obtained from 2754 drill samples and a block model comprising 4680 blocks. The initial analysis encompassed exploratory data examination, variography, multivariate analysis, and the delineation of geological and structural units. Subsequent analysis involved assessing the contacts between these units and estimating CCE via cokriging, considering its correlation with SPI. The selection of blocks exhibiting maximum CCE holds paramount importance for cost estimation, production planning, and risk mitigation. The study conducted exploratory data analysis on lithology, rock type, and failure variables, revealing seamless boundaries between geometallurgical units. Simulation methods, such as plurigaussian and turning bands, demonstrated more realistic outcomes than cokriging, owing to the inherent characteristics of geometallurgical data and the limitations of kriging methods.
Keywords: geometallurgy, multivariate analysis, plurigaussian, turning band method, cokriging
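The variography step mentioned above can be illustrated with an experimental semivariogram computed from scattered samples. This is a generic sketch on synthetic points, not the study's data or software.

```python
import numpy as np

def experimental_semivariogram(coords, values, lags, tol):
    """gamma(h) = (1 / 2N(h)) * sum over pairs at distance ~h of (z_i - z_j)^2."""
    n = len(values)
    # pairwise distances and squared value differences
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    sq = (values[:, None] - values[None, :]) ** 2
    iu = np.triu_indices(n, k=1)           # count each pair once
    d, sq = d[iu], sq[iu]
    gamma = []
    for h in lags:
        mask = np.abs(d - h) <= tol
        gamma.append(0.5 * sq[mask].mean() if mask.any() else np.nan)
    return np.array(gamma)

rng = np.random.default_rng(1)
coords = rng.uniform(0, 100, size=(200, 2))   # e.g. sample locations in metres
# spatially structured variable plus measurement noise
values = np.sin(coords[:, 0] / 20) + 0.1 * rng.standard_normal(200)
lags = np.array([5.0, 15.0, 30.0, 60.0])
print(experimental_semivariogram(coords, values, lags, tol=5.0))
```

Fitting a model (spherical, exponential, etc.) to such an experimental semivariogram is the input to kriging, cokriging, or the simulation methods compared in the abstract.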
Procedia PDF Downloads 68
10534 The Design and Implementation of a Calorimeter for Evaluation of the Thermal Performance of Materials: The Case of Phase Change Materials
Authors: Ebrahim Solgi, Zahra Hamedani, Behrouz Mohammad Kari, Ruwan Fernando, Henry Skates
Abstract:
The use of thermal energy storage (TES) as part of a passive design strategy can reduce a building's energy demand. TES materials do this by increasing the lag between energy consumption and energy supply, absorbing, storing and releasing energy in a controlled manner. The increase in lightweight construction in the building industry has made it harder to utilize thermal mass. Consequently, Phase Change Materials (PCMs) are a promising alternative, as they can be manufactured in thin layers and used with lightweight construction to store latent heat. This research investigates utilizing PCMs, the first step being to measure their performance under experimental conditions. Doing so requires three components: a calorimeter for measuring indoor thermal conditions, a pyranometer for recording the solar conditions (global, diffuse and direct radiation), and a data logger for recording temperature and humidity over the study period. This paper reports on the design and implementation of an experimental setup used to measure the thermal characteristics of PCMs as part of a wall construction. The experimental model has been simulated with the software EnergyPlus to create a reliable simulation model that warrants further investigation.
Keywords: phase change materials, EnergyPlus, experimental evaluation, night ventilation
Procedia PDF Downloads 256
10533 Pose-Dependency of Machine Tool Structures: Appearance, Consequences, and Challenges for Lightweight Large-Scale Machines
Authors: S. Apprich, F. Wulle, A. Lechler, A. Pott, A. Verl
Abstract:
Large-scale machine tools for the manufacturing of large workpieces, e.g. blades, casings or gears for wind turbines, feature pose-dependent dynamic behavior. Small structural damping coefficients lead to long decay times for structural vibrations that have negative impacts on the production process. Typically, these vibrations are handled by increasing the stiffness of the structure by adding mass. That is counterproductive to the needs of sustainable manufacturing, as it leads to higher resource consumption in both material and energy. Recent research activities have achieved higher resource efficiency through radical mass reduction, relying on control-integrated active vibration avoidance and damping methods. These control methods depend on information describing the dynamic behavior of the controlled machine tools in order to tune the avoidance or reduction parameters according to the current state of the machine. The paper presents the appearance, consequences and challenges of the pose-dependent dynamic behavior of lightweight large-scale machine tool structures in production. The paper starts with a theoretical introduction to the challenges of lightweight machine tool structures resulting from reduced stiffness. The statement of pose-dependent dynamic behavior is corroborated by the results of an experimental modal analysis of a lightweight test structure. Afterwards, the consequences of the pose-dependent dynamic behavior of lightweight machine tool structures for the use of active control and vibration reduction methods are explained. Based on the state of the art in pose-dependent dynamic machine tool models and a modal investigation of an FE model of the lightweight test structure, the criteria for a pose-dependent model for use in vibration reduction are derived.
The paper closes with an outlook on a general pose-dependent model of the dynamic behavior of large lightweight machine tools that provides the necessary input to the aforementioned vibration avoidance and reduction methods to properly tackle machine vibrations.
Keywords: dynamic behavior, lightweight, machine tool, pose-dependency
Procedia PDF Downloads 457
10532 Discourses in Mother Tongue-Based Classes: The Case of Hiligaynon Language
Authors: Kayla Marie Sarte
Abstract:
This study sought to describe mother tongue-based classes in the light of classroom interactional discourse using the Sinclair and Coulthard model. It specifically identified the exchanges, grouped into Teaching and Boundary types; the moves, coded as Opening, Answering and Feedback; and the occurrence of the acts (Bid, Cue, Nominate, Reply, React, Acknowledge, Clue, Accept, Evaluate, Loop, Comment, Starter, Conclusion, Aside and Silent Stress) in the classroom, and determined what these reveal about the teaching and learning processes in the MTB classroom. Being a qualitative study using the single collective case within-site (embedded) design, it employed varied data collection procedures such as non-participant observations, audio recording and transcription of MTB classes, and semi-structured interviews. The results revealed the presence of all the acts in the model except the silent stress, implying that the Hiligaynon mother tongue-based class was eclectic, cultural and communicative, with a healthy, analytical and focused environment that aligned with the aims of MTB-MLE and affirmed the purported benefits of mother tongue teaching. The study also identified gaps in mother tongue teaching and learning, including the difficulty children have memorizing Hiligaynon terms that are expressed in English in their homes and communities.
Keywords: discourse analysis, language teaching and learning, mother tongue-based education, multilingualism
Procedia PDF Downloads 259
10531 A Kinetic Study on Recovery of High-Purity Rutile TiO₂ Nanoparticles from Titanium Slag Using Sulfuric Acid under Sonochemical Procedure
Authors: Alireza Bahramian
Abstract:
High-purity TiO₂ nanoparticles (NPs) with sizes ranging between 50 nm and 100 nm were synthesized from titanium slag through the sulphate route under a sonochemical procedure. The effects of dissolution parameters such as the sulfuric acid/slag weight ratio, caustic soda concentration, digestion temperature and time, and initial particle size of the dried slag on the extraction efficiency of TiO₂ and the removal of iron were examined. By optimizing the digestion conditions, a rutile TiO₂ powder with a surface area of 42 m²/g and a mean pore diameter of 22.4 nm was prepared. A thermo-kinetic analysis showed that the digestion temperature has an important effect, while the acid/slag weight ratio and the initial size of the slag have a moderate effect on the dissolution rate. The shrinking-core model, including both chemical surface reaction and surface diffusion, is used to describe the leaching process. The activation energy of 38.12 kJ/mol indicates that the surface chemical reaction is the rate-controlling step. The kinetic analysis suggested a first-order reaction mechanism with respect to acid concentration.
Keywords: TiO₂ nanoparticles, titanium slag, dissolution rate, sonochemical method, thermo-kinetic study
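The activation-energy estimate behind such a thermo-kinetic analysis comes from an Arrhenius fit: plotting ln k against 1/T gives a slope of -Ea/R. The rate constants below are generated from an assumed pre-exponential factor for illustration; only the target activation energy of 38.12 kJ/mol echoes the value reported above.

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def arrhenius_k(A, Ea, T):
    """Rate constant k = A * exp(-Ea / (R*T))."""
    return A * math.exp(-Ea / (R * T))

# Hypothetical rate constants generated with Ea = 38.12 kJ/mol (illustrative only)
Ea_true = 38120.0
temps = [303.0, 323.0, 343.0, 363.0]
ks = [arrhenius_k(1.0e3, Ea_true, T) for T in temps]

# Linear least-squares fit of ln k vs 1/T: slope = -Ea/R
xs = [1.0 / T for T in temps]
ys = [math.log(k) for k in ks]
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
        sum((x - xbar) ** 2 for x in xs)
Ea_fit = -slope * R
print(f"fitted Ea = {Ea_fit / 1000:.2f} kJ/mol")  # 38.12
```

With measured rate constants at several digestion temperatures, the same fit yields the experimental activation energy used to decide the rate-controlling step.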
Procedia PDF Downloads 252
10530 Financial Modeling for Net Present Benefit Analysis of Electric Bus and Diesel Bus and Applications to NYC, LA, and Chicago
Authors: Jollen Dai, Truman You, Xinyun Du, Katrina Liu
Abstract:
Transportation is one of the leading sources of greenhouse gas (GHG) emissions. Thus, to meet the 2015 Paris Agreement, all countries must adopt a different and more sustainable transportation system. From bikes to Maglev, the world is slowly shifting to sustainable transportation. To develop a useful public transit system, a sustainable web of buses must be implemented. As of now, only a handful of cities have adopted a detailed plan to implement a full fleet of e-buses by the 2030s, with Shenzhen in the lead. Every change requires a detailed plan and a focused analysis of its impacts. In this report, the economic and financial implications have been taken into consideration to develop a well-rounded 10-year plan for New York City. We also apply the same financial model to two other cities, LA and Chicago. We picked NYC, Chicago, and LA for the comparative NPB analysis since they are all big metropolitan cities with complex transportation systems. All three cities have started action plans to achieve a full e-bus fleet within decades. Moreover, their energy carbon footprints and energy prices are very different, which are key factors in the benefits of electric buses. Using TCO (Total Cost of Ownership) financial analysis, we developed a model to calculate NPB (Net Present Benefit) and compare EBS (electric buses) to DBS (diesel buses). We have considered all essential aspects in our model: initial investment, including the cost of a bus, charger, and installation; government funds (federal, state, local); labor cost; energy (electricity or diesel) cost; maintenance cost; insurance cost; health and environmental benefits; and V2G (vehicle-to-grid) benefits. We see about $1,400,000 in benefits over the 12-year lifetime of an EBS compared to a DBS, provided government funds offset 50% of the EBS purchase cost.
With the government subsidy, an EBS starts to generate positive cash flow in the 5th year and can pay back its investment within 5 years. Note that our model counts environmental and health benefits, with $50,000 per bus per year counted as health benefits. Besides health benefits, the most significant benefits come from energy cost savings and maintenance savings, which are about $600,000 and $200,000 respectively over the 12-year life cycle. Using linear regression, given certain budget limitations, we then designed an optimal three-phase process to replace all NYC buses with electric buses in 10 years, i.e., by 2033. The linear regression process minimizes the total cost over the years while achieving the lowest environmental cost. The overall benefit of replacing all DBS with EBS for NYC is over $2.1 billion by 2033. For LA and Chicago, the benefits of electrifying the current bus fleets are $1.04 billion and $634 million by 2033. All NPB analyses and the algorithm to optimize the electrification phases are implemented in Python code and can be shared.
Keywords: financial modeling, total cost of ownership, net present benefits, electric bus, diesel bus, NYC, LA, Chicago
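The cash-flow logic behind the payback claim can be sketched as follows. All figures are placeholders loosely patterned on the ranges quoted above, not the study's actual inputs, and discounting is left out for simplicity.

```python
def cumulative_cash_flow(upfront_cost, subsidy_rate, annual_benefit, years):
    """Cumulative net cash flow per year for an e-bus vs. a diesel baseline.

    upfront_cost: incremental purchase + charger cost of the e-bus
    subsidy_rate: fraction of the purchase cost covered by government funds
    annual_benefit: yearly savings (energy + maintenance + health/environment)
    """
    net_upfront = upfront_cost * (1.0 - subsidy_rate)
    return [annual_benefit * y - net_upfront for y in range(1, years + 1)]

def payback_year(cash_flows):
    """First year in which the cumulative cash flow turns positive (or None)."""
    for year, cf in enumerate(cash_flows, start=1):
        if cf > 0:
            return year
    return None

# Hypothetical numbers: $1.0M incremental cost, 50% subsidy,
# ~$120k/yr combined savings, 12-year bus lifetime
flows = cumulative_cash_flow(1_000_000, 0.5, 120_000, 12)
print(payback_year(flows))   # 5
print(flows[-1])             # lifetime net benefit: 940000
```

A full NPB model would discount each year's flow to present value and sum over the fleet and the phase-in schedule.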
Procedia PDF Downloads 49
10529 Temporal Profile of T2 MRI and 1H-MRS in the MDX Mouse Model of Duchenne Muscular Dystrophy
Authors: P. J. Sweeney, T. Ahtoniemi, J. Puoliväli, T. Laitinen, K.Lehtimäki, A. Nurmi, D. Wells
Abstract:
Duchenne muscular dystrophy (DMD) is an X-linked, lethal muscle wasting disease for which there is currently no treatment that effectively prevents muscle necrosis and progressive muscle loss. DMD is among the most common inherited diseases, affecting around 1 in 3500 live male births. MDX (X-linked muscular dystrophy) mice only partially encapsulate the human disease, displaying muscle weakness, muscle damage and edema during a period deemed the "critical period", when these mice go through cycles of muscular degeneration and regeneration. Although the MDX mutant mouse has been extensively studied as a model for DMD, to date an extensive temporal, non-invasive imaging profile utilizing magnetic resonance imaging (MRI) and 1H-magnetic resonance spectroscopy (1H-MRS) has not been performed. In addition, longitudinal imaging characterization has not coincided with attempts to exacerbate the progressive muscle damage by exercise. In this study, we employed an 11.7 T small-animal MRI system to characterize the MRI and MRS profile of MDX mice longitudinally over a 12-month period during which the mice were subjected to exercise. Male mutant MDX mice (n=15) and male wild-type mice (n=15) were subjected to a chronic exercise regime of treadmill walking (30 min/session) bi-weekly over the whole 12-month follow-up period. Mouse gastrocnemius and tibialis anterior muscles were profiled with baseline T2-MRI and 1H-MRS at 6 weeks of age. Imaging and spectroscopy were repeated at 3, 6, 9 and 12 months of age. Plasma creatine kinase (CK) measurements coincided with the T2-MRI and 1H-MRS time points and were also taken after the "critical period", at 10 weeks of age. The results obtained indicate that chronic exercise extends the dystrophic phenotype of MDX mice, as evidenced by T2-MRI and 1H-MRS.
T2-MRI revealed the extent and location of muscle damage in the gastrocnemius and tibialis anterior muscles as hyperintensities (lesions and edema) in exercised MDX mice over the follow-up period. The magnitude of the muscle damage remained stable over time in exercised mice. No evident fat infiltration or accumulation in the muscle tissues was seen at any time point in exercised MDX mice. Creatine, choline and taurine levels evaluated by 1H-MRS from the same muscles were significantly decreased at each time point. Extramyocellular (EMCL) and intramyocellular (IMCL) lipids did not change in exercised mice, supporting the fat-content findings from the anatomical T2-MRI scans. Creatine kinase levels were significantly higher in exercised MDX mice during the follow-up period, and importantly, CK levels remained stable over the whole follow-up period. Taken together, we have described here a longitudinal profile of muscle damage and muscle metabolic changes in MDX mice subjected to chronic exercise. The extent of muscle damage by T2-MRI was stable through the follow-up period in the muscles examined. In addition, the metabolic profile, especially creatine, choline and taurine levels in muscle, was sustained between time points. The anatomical muscle damage evaluated by T2-MRI was supported by plasma CK levels, which remained stable over the follow-up period. These findings show that non-invasive imaging and spectroscopy can be used effectively to evaluate chronic muscle pathology. These techniques can also be used to evaluate the effect of various manipulations, like exercise here, on the phenotype of the mice. Many of the findings we present here are translatable to clinical disease, such as decreased creatine, choline and taurine levels in muscle.
Imaging by T2-MRI and 1H-MRS also revealed that fat content and extramyocellular and intramyocellular lipids, respectively, are not changed in MDX mice, in contrast to the clinical manifestation of Duchenne muscular dystrophy. The findings show that non-invasive imaging can be used to characterize the phenotype of the MDX model and its translatability to clinical disease, and to study events that have traditionally not been examined, such as sustained muscle damage from rigorous exercise after the "critical period". The ability of this model to display sustained damage beyond the spontaneous "critical period", and in turn to study drug effects on this extended phenotype, will increase the value of the MDX mouse model as a tool to study therapies and treatments aimed at DMD and associated diseases.
Keywords: 1H-MRS, MRI, muscular dystrophy, mouse model
Procedia PDF Downloads 356
10528 Analysis Model for the Relationship of Users, Products, and Stores on Online Marketplace Based on Distributed Representation
Authors: Ke He, Wumaier Parezhati, Haruka Yamashita
Abstract:
Recently, online marketplaces in the e-commerce industry, such as Rakuten and Alibaba, have become among the most popular shopping sites in Asia. On these shopping websites, consumers can select and purchase products from a large number of stores. Additionally, consumers have to register their name, age, gender, and other information in advance to access their registered account. Therefore, a method for analyzing consumer preferences from both the store side and the product side is required. This study uses the Doc2Vec method, which has been studied in the field of natural language processing. Doc2Vec has been used in many cases to extract semantic relationships between documents (here representing consumers) and words (here representing products) in the field of document classification. This concept is applicable to representing the relationship between users and items; however, the problem is that one more factor (i.e., shops) needs to be considered in Doc2Vec. More precisely, a method for analyzing the relationship between consumers, stores, and products is required. The purpose of our study is to combine the Doc2Vec analysis for users and shops with that for users and items in the same feature space. This method enables the calculation of similar shops and items for each user. In this study, we analyze real data accumulated in an online marketplace and demonstrate the efficiency of the proposal.
Keywords: Doc2Vec, online marketplace, marketing, recommendation systems
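The idea of placing users, shops, and items in one shared feature space can be sketched with toy embedding vectors and cosine similarity. The vectors below are random stand-ins; a real implementation would train them with a Doc2Vec-style model rather than initialize them at random.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

rng = np.random.default_rng(7)
dim = 8
# One shared space for all three entity types (toy, randomly initialised)
embeddings = {
    "user_A": rng.standard_normal(dim),
    "shop_1": rng.standard_normal(dim),
    "shop_2": rng.standard_normal(dim),
    "item_x": rng.standard_normal(dim),
    "item_y": rng.standard_normal(dim),
}

def rank_for(query, candidates):
    """Rank candidate shops/items for a user by cosine similarity."""
    q = embeddings[query]
    return sorted(candidates, key=lambda c: cosine(q, embeddings[c]), reverse=True)

print(rank_for("user_A", ["shop_1", "shop_2"]))
print(rank_for("user_A", ["item_x", "item_y"]))
```

Because every entity lives in the same space, the same ranking function recommends either shops or items for a given user.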
Procedia PDF Downloads 111
10527 Predicting Subsurface Abnormalities Growth Using Physics-Informed Neural Networks
Authors: Mehrdad Shafiei Dizaji, Hoda Azari
Abstract:
The research explores the pioneering integration of Physics-Informed Neural Networks (PINNs) into the domain of Ground-Penetrating Radar (GPR) data prediction, akin to advancements in medical imaging for tracking tumor progression in the human body. It presents a detailed development framework for a specialized PINN model proficient at interpreting and forecasting GPR data, much like how medical imaging models predict tumor behavior. By harnessing the synergy between deep learning algorithms and the physical laws governing subsurface structures (or, in medical terms, human tissues), the model embeds the physics of electromagnetic wave propagation into its architecture. This ensures that predictions not only align with fundamental physical principles but also mirror the precision needed in medical diagnostics for detecting and monitoring tumors. The suggested deep learning structure comprises three components: a CNN, a spatial feature channel attention (SFCA) mechanism, and a ConvLSTM with temporal feature frame attention (TFFA) modules. The attention mechanism computes channel attention and temporal attention weights through self-adaptation, fine-tuning the visual and temporal feature responses to extract the most pertinent and significant features. By integrating physics directly into the neural network, our model has shown enhanced accuracy in forecasting GPR data. This improvement is vital for effective assessments of bridge deck conditions and other evaluations of civil infrastructure. The use of PINNs has demonstrated the potential to transform the field of Non-Destructive Evaluation (NDE) by enhancing the precision of infrastructure deterioration predictions.
Moreover, it offers deeper insight into the fundamental mechanisms of deterioration, viewed through the prism of physics-based models.
Keywords: physics-informed neural networks, deep learning, ground-penetrating radar (GPR), NDE, ConvLSTM, physics, data driven
Procedia PDF Downloads 37
10526 'How to Change Things When Change is Hard': Motivating Libyan College Students to Play an Active Role in Their Learning Process
Authors: Hameda Suwaed
Abstract:
Group work, time management and accepting others' opinions are practices rooted in the socio-political culture of democratic nations. In Libya, a country transitioning towards democracy, what is the impact of encouraging college students to use such practices in the English language classroom? And how can teachers be encouraged to use such practices in an educational system characterized by traditional methods of teaching? Using data gathered through action research and classroom research, this study investigates how teachers can use education to change their students' understanding of their roles in society by enhancing their sense of belonging to it. The study adapts a model of change that includes giving students clear directions, sufficient motivation and a supportive environment. These steps were applied by encouraging students to participate actively in the classroom through group work and a variety of activities. The findings showed that following the suggested model can broaden students' perception of their belonging to their environment, starting with their classroom and ending with their country. In conclusion, although this was a small-scale study, the students' participation in the classroom shows that they gained self-confidence in using practices such as group work, presenting their ideas and accepting different opinions. Remarkably, most students were aware that this is what Libya needs nowadays.
Keywords: educational change, students' motivation, group work, foreign language teaching
Procedia PDF Downloads 421
10525 Hedonic Pricing Model of Parboiled Rice
Authors: Roengchai Tansuchat, Wassanai Wattanutchariya, Aree Wiboonpongse
Abstract:
Parboiled rice is one of the most important food grains and is classified as a cereal product. In 2015, parboiled rice accounted for more than 14.34% of total rice trade. The major parboiled rice exporting countries are Thailand and India, while many countries in Africa and the Middle East, such as Nigeria, South Africa, the United Arab Emirates, and Saudi Arabia, are parboiled rice importers. In the global rice market, parboiled rice pricing differs from white rice pricing because parboiled rice is a semi-processed product (soaked, steamed and dried), which affects its color and texture. Therefore, parboiled rice export pricing depends not only on the trade volume, length of grain, and percentage of broken rice or purity, but also on rice seed attributes such as color, whiteness, consistency of color and whiteness, and texture. In addition, the parboiled rice price may depend on the country of origin and other attributes, such as certification mark, label, packaging, and sales location. The objectives of this paper are to study the attributes of parboiled rice sold in different countries and to evaluate the relationship between parboiled rice prices in different countries and their attributes by using a hedonic pricing model. These results are useful for product development and the development of marketing strategies. The 141 samples of parboiled rice were collected from 5 major parboiled rice consumption countries, namely Nigeria, South Africa, Saudi Arabia, the United Arab Emirates and Spain. The physicochemical and optical properties, namely size and shape of seed, colour (L*, a*, and b*), texture (hardness, adhesiveness, cohesiveness, springiness, gumminess, and chewiness), nutrition (moisture, protein, carbohydrate, fat, and ash), amylose content, package, country of origin, and label are considered as explanatory variables.
The results from the parboiled rice analysis revealed that most samples are classified as long grain and slender. The highest average whiteness value was found in the parboiled rice sold in South Africa. The amylose analysis shows that most of the parboiled rice is non-glutinous and falls in the intermediate amylose content range, with the maximum value found in the United Arab Emirates. The hedonic pricing model showed that size and shape are statistically significant factors in determining the parboiled rice price. Among the colour attributes, the brightness value (L*) and red-green value (a*) are statistically significant, but the yellow-blue value (b*) is not. In addition, the texture attributes that significantly affect the parboiled rice price are hardness, adhesiveness, cohesiveness, and gumminess. The findings could help parboiled rice millers, exporters and retailers formulate better production and marketing strategies by focusing on these attributes.
Keywords: hedonic pricing model, optical properties, parboiled rice, physicochemical properties
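An illustrative sketch of the hedonic pricing idea: regress price on attributes by ordinary least squares, so the fitted coefficients are the implicit ("hedonic") prices of each attribute. The attribute names echo the study (grain size, whiteness L*, hardness), but the data below are synthetic, not the authors' 141 samples, and the coefficient values are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 141
size = rng.uniform(5.5, 7.5, n)        # grain length, mm (assumed range)
whiteness = rng.uniform(60, 80, n)     # colour L* value (assumed range)
hardness = rng.uniform(40, 70, n)      # texture hardness (arbitrary units)

# Synthetic "true" hedonic relation: price = implicit attribute prices + noise.
price = 0.8 + 0.30 * size + 0.02 * whiteness + 0.01 * hardness \
        + rng.normal(0, 0.01, n)

# OLS fit: beta[1:] estimate the implicit price of each attribute.
X = np.column_stack([np.ones(n), size, whiteness, hardness])
beta, *_ = np.linalg.lstsq(X, price, rcond=None)
```

Significance testing (as in the abstract's discussion of L*, a*, b*) would add standard errors and t-statistics on top of this fit; the mechanics of the model are just this regression.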
Procedia PDF Downloads 330
10524 Regression of Hand Kinematics from Surface Electromyography Data Using a Long Short-Term Memory-Transformer Model
Authors: Anita Sadat Sadati Rostami, Reza Almasi Ghaleh
Abstract:
Surface electromyography (sEMG) offers important insights into muscle activation and has applications in fields including rehabilitation and human-computer interaction. The purpose of this work is to predict the degree of activation of two joints in the index finger using an LSTM-Transformer architecture trained on sEMG data from the Ninapro DB8 dataset. We apply advanced preprocessing techniques, such as multi-band filtering and customizable rectification methods, to enhance the encoding of sEMG data into features that are beneficial for regression tasks. The processed data is converted into spike patterns and simulated using Leaky Integrate-and-Fire (LIF) neuron models, allowing for neuromorphic-inspired processing. Our findings demonstrate that adjusting filtering parameters and neuron dynamics and employing the LSTM-Transformer model improves joint angle prediction performance. This study contributes to the ongoing development of deep learning frameworks for sEMG analysis, which could lead to improvements in motor control systems.
Keywords: surface electromyography, LSTM-Transformer, spiking neural networks, hand kinematics, leaky integrate-and-fire neuron, band-pass filtering, muscle activity decoding
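A hedged sketch of the spike-encoding step mentioned above: a rectified sEMG envelope drives a Leaky Integrate-and-Fire (LIF) neuron, so stronger activation produces more spikes. The membrane parameters (time constant, threshold) and the synthetic input are illustrative, not taken from the Ninapro DB8 pipeline.

```python
import numpy as np

def lif_encode(signal, dt=0.001, tau=0.02, v_th=1.0):
    """Convert a non-negative drive signal into a 0/1 spike train."""
    v, spikes = 0.0, []
    for i in signal:
        # Leaky integration: membrane decays toward 0, driven by the input.
        v += dt * (-v / tau + i)
        if v >= v_th:          # threshold crossing emits a spike, then reset
            spikes.append(1)
            v = 0.0
        else:
            spikes.append(0)
    return np.array(spikes)

t = np.arange(0, 1, 0.001)
emg = np.abs(np.sin(2 * np.pi * 5 * t)) * 120.0   # full-wave rectified drive
spikes = lif_encode(emg)                          # neuromorphic spike pattern
```

With these parameters the steady-state membrane level is input × tau, so a drive whose peak stays below v_th / tau never spikes at all — which is why tuning the neuron dynamics, as the abstract notes, matters for the downstream regression.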
Procedia PDF Downloads 2
10523 Troubleshooting Petroleum Equipment Using Wireless Sensors and a Bayesian Algorithm
Authors: Vahid Bayrami Rad
Abstract:
In this research, common methods and techniques have been investigated with a focus on intelligent fault-finding and monitoring systems in the oil industry. Remote and intelligent control methods are considered a necessity for implementing various operations in the oil industry, and benefiting from the knowledge extracted, with the help of data mining algorithms, from the countless data generated is an effective way to speed up monitoring and troubleshooting operations in today's big oil companies. Therefore, by comparing data mining algorithms and examining their efficiency, structure and behaviour under different conditions, the proposed Bayesian algorithm, using data clustering and analysis together with data evaluation via a colored Petri net, has provided an applicable and dynamic model from the point of view of reliability and response time. By using this method, it is possible to achieve a dynamic and consistent model of the remote control system, prevent the occurrence of leakage in oil pipelines and refineries, and reduce costs and human and financial errors. The statistical data obtained from the evaluation process show an increase in reliability, availability and speed compared to previous methods.
Keywords: wireless sensors, petroleum equipment troubleshooting, Bayesian algorithm, colored Petri net, RapidMiner, data mining, reliability
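A toy illustration of the Bayesian core of such a monitoring pipeline: updating the posterior probability that a pipeline segment is leaking as wireless-sensor alarms arrive. The base rate and likelihoods are invented for the example, and the colored-Petri-net evaluation layer of the paper is not modelled here.

```python
def update(prior, p_alarm_given_fault, p_alarm_given_ok, alarm):
    """Bayes' rule over a binary fault / no-fault hypothesis."""
    # Likelihood of the observed evidence under each hypothesis.
    l_fault = p_alarm_given_fault if alarm else 1 - p_alarm_given_fault
    l_ok = p_alarm_given_ok if alarm else 1 - p_alarm_given_ok
    num = l_fault * prior
    return num / (num + l_ok * (1 - prior))

p = 0.01                                   # assumed base rate of a leak
for alarm in [True, True, True]:           # three sensors raise alarms in turn
    p = update(p, p_alarm_given_fault=0.9, p_alarm_given_ok=0.05, alarm=alarm)
# After three independent alarms the leak hypothesis dominates.
```

The same update run with `alarm=False` pulls the posterior down, which is how a Bayesian monitor distinguishes a persistent fault from a single spurious sensor reading.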
Procedia PDF Downloads 64
10522 Longitudinal Vibration of a Micro-Beam in a Micro-Scale Fluid Media
Authors: M. Ghanbari, S. Hossainpour, G. Rezazadeh
Abstract:
In this paper, the longitudinal vibration of a micro-beam in a micro-scale fluid medium has been investigated. The proposed mathematical model for this study is made up of a micro-beam and a micro-plate at its free end. An AC voltage is applied to the pair of piezoelectric layers on the upper and lower surfaces of the micro-beam in order to actuate it longitudinally. The whole structure is bounded between two fixed plates on its upper and lower surfaces. The micro-gap between the structure and the fixed plates is filled with fluid. Fluids behave differently at the micro-scale than at the macro-scale, so the fluid field in the gap has been modeled based on micro-polar theory. The coupled governing equations of motion of the micro-beam and the micro-scale fluid field have been derived. Due to the non-homogeneous boundary conditions, the derived equations have been transformed into an enhanced form with homogeneous boundary conditions. Using a Galerkin-based reduced-order model, the enhanced equations have been discretized over the beam and fluid domains and solved simultaneously in order to obtain the forced response of the micro-beam. The effects of the micro-polar parameters of the fluid, such as characteristic length scale, coupling parameter and surface parameter, on the response of the micro-beam have been studied.
Keywords: micro-polar theory, Galerkin method, MEMS, micro-fluid
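A minimal sketch of the Galerkin reduced-order step, applied to a stripped-down version of the problem: free longitudinal vibration of a fixed-free elastic beam, rho·u_tt = E·u_xx, with the micro-polar fluid coupling omitted. The material values are assumed round numbers, not the authors'. Projecting onto admissible shape functions yields small mass and stiffness matrices whose generalized eigenvalues give the natural frequencies.

```python
import numpy as np

E, rho, L = 2.0e11, 7800.0, 1.0e-3      # Young's modulus, density, length (assumed)
x = np.linspace(0.0, L, 2001)

def integ(f):                            # trapezoidal quadrature over x
    return float(np.sum((f[:-1] + f[1:]) * 0.5) * (x[1] - x[0]))

# Shape functions phi_j = sin((2j-1) pi x / 2L) satisfy u(0)=0, u'(L)=0,
# i.e. the fixed-free boundary conditions; we keep two of them.
ks = [(2 * j - 1) * np.pi / (2 * L) for j in (1, 2)]
phi = [np.sin(k * x) for k in ks]
dphi = [k * np.cos(k * x) for k in ks]

# Galerkin projection: M_ij = rho * int(phi_i phi_j), K_ij = E * int(phi_i' phi_j')
M = np.array([[rho * integ(pa * pb) for pb in phi] for pa in phi])
K = np.array([[E * integ(da * db) for db in dphi] for da in dphi])
omega = np.sort(np.sqrt(np.linalg.eigvals(np.linalg.solve(M, K)).real))
```

For this simple case the exact frequencies are (2n-1)·(pi/2L)·sqrt(E/rho), so the reduced-order model can be checked directly; the paper's version adds the fluid equations to the same projected system.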
Procedia PDF Downloads 182
10521 Simultaneous Targeting of MYD88 and Nur77 as an Effective Approach for the Treatment of Inflammatory Diseases
Authors: Uzma Saqib, Mirza S. Baig
Abstract:
Myeloid differentiation primary response protein 88 (MYD88) has long been considered a central player in the inflammatory pathway. Recent studies clearly suggest that it is an important therapeutic target in inflammation. On the other hand, a recent study on the interaction between the orphan nuclear receptor Nur77 and p38α, leading to an increased lipopolysaccharide-induced hyperinflammatory response, suggests this binary complex as a therapeutic target. In this study, we have designed inhibitors that can inhibit both MYD88 and Nur77 at the same time. Since both MYD88 and Nur77 are integral parts of the pathways involving lipopolysaccharide-induced activation of NF-κB-mediated inflammation, we tried to target both proteins with the same library in order to retrieve compounds having dual inhibitory properties. To perform this, we developed a homodimeric model of MYD88 and, along with the crystal structure of Nur77, screened a virtual library of ~61,000 compounds from the traditional Chinese medicine database. We analyzed the resulting hits for their dual-binding efficacy and used them to develop a common pharmacophore model that could serve as a prototype to screen compound libraries, as well as to guide combinatorial library design in the search for ideal dual-target inhibitors. Thus, our study explores the identification of novel leads having dual inhibitory effects due to binding to both the MYD88 and Nur77 targets.
Keywords: drug design, Nur77, MYD88, inflammation
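An abstract illustration of the dual-target selection step: given docking scores of a compound library against each target, keep only compounds ranked near the top for both. The compound names and scores below are invented placeholders, not results from the study's ~61,000-compound screen, and real workflows would use actual docking software output.

```python
def dual_hits(scores_a, scores_b, top_fraction=0.4):
    """Return compounds in the best `top_fraction` of BOTH score lists."""
    def top(scores):
        k = max(1, int(len(scores) * top_fraction))
        # More negative docking score = stronger predicted binding.
        return set(sorted(scores, key=scores.get)[:k])
    return top(scores_a) & top(scores_b)

# Hypothetical docking scores (kcal/mol) against the two targets.
myd88 = {'c1': -9.1, 'c2': -6.0, 'c3': -8.7, 'c4': -5.2, 'c5': -8.9}
nur77 = {'c1': -8.5, 'c2': -8.8, 'c3': -5.1, 'c4': -6.0, 'c5': -8.6}
hits = dual_hits(myd88, nur77)   # only compounds strong on both survive
```

The surviving intersection is exactly the pool from which a common pharmacophore, as described above, would then be derived.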
Procedia PDF Downloads 302
10520 Does Citizens' Involvement Always Improve Outcomes: Procedures, Incentives and Comparative Advantages of Public and Private Law Enforcement
Authors: Svetlana Avdasheva, Polina Kryuchkova
Abstract:
The comparative social efficiency of private and public enforcement of law is debated. This question is not only of academic interest; it is also important for the development of the legal system and regulations. Generally, involvement of 'common citizens' in public law enforcement is considered to be beneficial, while involvement of interest group representatives is not. Institutional economics, as well as law and economics, considers the difference between public and private enforcement to be rather mechanical. Actions of bureaucrats in government agencies are assumed to be driven by incentives linked to social welfare (or another indicator of public interest) and their own benefits. In contrast, actions of participants in private enforcement are driven by their private benefits. However, administrative law enforcement may be designed in such a way that it becomes driven mainly by the individual incentives of alleged victims. We refer to this system as reactive public enforcement. Citizens may prefer using reactive public enforcement even if private enforcement is available. However, replacement of public enforcement by the reactive version of public enforcement negatively affects deterrence and reduces social welfare. We illustrate the problem of private vs. pure public and private vs. reactive public enforcement models with the examples of three legislative subsystems in Russia: labor law, consumer protection law and competition law. While development of private enforcement instead of public (especially in the reactive public model) is desirable, replacement of both public and private enforcement by the reactive model is definitely not.
Keywords: public enforcement, private complaints, legal errors, competition protection, labor law, competition law, Russia
Procedia PDF Downloads 494
10519 Aggregation Scheduling Algorithms in Wireless Sensor Networks
Authors: Min Kyung An
Abstract:
In wireless sensor networks, which consist of tiny wireless sensor nodes with limited battery power, one of the most fundamental applications is data aggregation, which collects nearby environmental conditions and aggregates the data to a designated destination, called a sink node. Important issues concerning data aggregation are time efficiency and energy consumption due to the nodes' limited energy, and therefore the related problem, named Minimum Latency Aggregation Scheduling (MLAS), has been the focus of many researchers. Its objective is to compute the minimum latency schedule, that is, a schedule with the minimum number of timeslots, such that the sink node can receive the aggregated data from all the other nodes without any collision or interference. For the problem, two interference models, the graph model and the more realistic physical interference model known as Signal-to-Interference-plus-Noise Ratio (SINR), have been adopted with different power models (uniform power and non-uniform power, with or without power control) and different antenna models (omni-directional and directional). In this survey article, as the problem has been proven to be NP-hard, we present and compare several state-of-the-art approximation algorithms in the various models on the basis of latency as the performance measure.
Keywords: data aggregation, convergecast, gathering, approximation, interference, omni-directional, directional
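A simplified graph-model illustration of the convergecast scheduling the survey covers: a greedy schedule on a routing tree in which each node transmits once, only after all its children, and no two children of the same parent share a timeslot (they would collide at the receiver). This ignores SINR and secondary interference, so it is a sketch of the problem setting, not one of the surveyed approximation algorithms.

```python
from collections import defaultdict

def aggregation_schedule(parent, sink='sink'):
    """parent maps each non-sink node to its parent; returns node -> timeslot."""
    children = defaultdict(list)
    for v, p in parent.items():
        children[p].append(v)
    slot = {}

    def visit(v):
        for c in children[v]:
            visit(c)                  # schedule the whole subtree first
        if v == sink:
            return
        # A node may transmit only after receiving from all its children.
        ready = max((slot[c] for c in children[v]), default=0)
        t = ready + 1
        siblings = children[parent[v]]
        while any(slot.get(s) == t for s in siblings):
            t += 1                    # avoid collision at the shared parent
        slot[v] = t

    visit(sink)
    return slot

parent = {'a': 'sink', 'b': 'sink', 'c': 'a'}   # c -> a -> sink, b -> sink
slots = aggregation_schedule(parent)            # aggregation latency = 2 slots
```

The aggregation latency is the maximum assigned slot; MLAS asks for the schedule minimizing that maximum, which this greedy pass does not guarantee in general.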
Procedia PDF Downloads 228
10518 Case Study Analysis of 2017 European Railway Traffic Management Incident: The Application of System for Investigation of Railway Interfaces Methodology
Authors: Sanjeev Kumar Appicharla
Abstract:
This paper presents the results of the modelling and analysis of a European Railway Traffic Management System (ERTMS) safety-critical incident on the Cambrian Railway in the UK, undertaken to raise awareness of biases in the systems engineering process, using the RAIB 17/2019 report as the primary input. The RAIB, the UK's independent accident investigator, published Report RAIB 17/2019 giving the details of its investigation of the focal event in the form of the immediate cause, causal factors, underlying factors and recommendations to prevent a repeat of the safety-critical incident on the Cambrian Line. The System for Investigation of Railway Interfaces (SIRI) is the methodology used to model and analyze the safety-critical incident. The SIRI methodology uses the Swiss Cheese Model to model the incident and identifies latent failure conditions (potentially less-than-adequate conditions) by means of the management oversight and risk tree technique. The benefits of the SIRI methodology are threefold. First, it incorporates the 'heuristics and biases' approach, advanced by the 2002 Nobel laureate in Economic Sciences, Prof. Daniel Kahneman, into the management oversight and risk tree technique to identify systematic errors. Civil engineering and programme management railway professionals are aware of the role 'optimism bias' plays in programme cost overruns and are aware of bow-tie (fault and event tree) model-based safety risk modelling techniques. However, the role of systematic errors due to heuristics and biases is not yet appreciated. This overcomes the problem of omitting human and organizational factors from accident analysis.
Second, the scope of the investigation includes all levels of the socio-technical system, including government, regulators, railway safety bodies, duty holders, signaling firms, transport planners, and front-line staff, so that lessons are learned at the decision-making and implementation levels as well. Third, the author's past accident case studies are supplemented with research evidence drawn from practitioners' and academic researchers' publications. This serves to discuss the role of systems thinking in improving the decision-making and risk management processes and practices in the IEC 15288 systems engineering standard and in industrial contexts such as GB railways and artificial intelligence (AI).
Keywords: accident analysis, AI algorithm internal audit, bounded rationality, Byzantine failures, heuristics and biases approach
Procedia PDF Downloads 188
10517 Particle Filter Supported with the Neural Network for Aircraft Tracking Based on Kernel and Active Contour
Authors: Mohammad Izadkhah, Mojtaba Hoseini, Alireza Khalili Tehrani
Abstract:
In this paper, we present a new method for tracking flying targets in color video sequences based on contour and kernel information. The aim of this work is to overcome the problem of losing the target under changing light, large displacement, changing speed, and occlusion. The proposed method consists of three steps: estimating the target location with a particle filter, segmenting the target region using a neural network, and finding the exact contours with a greedy snake algorithm. In the proposed method, we have used both region and contour information to create the target candidate model, and this model is dynamically updated during tracking. To avoid the accumulation of errors when updating, the target region is given to a perceptron neural network to separate the target from the background. Its output is then used for exact calculation of the size and center of the target, and as the initial contour for the greedy snake algorithm to find the exact target edge. The proposed algorithm has been tested on a database which contains many challenges, such as the high speed and agility of aircraft, background clutter, occlusions, and camera movement. The experimental results show that the use of the neural network increases the accuracy of tracking and segmentation.
Keywords: video tracking, particle filter, greedy snake, neural network
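A hedged sketch of the first step only (the particle-filter location estimate): a bootstrap particle filter tracking a 1D position under Gaussian motion and measurement noise. The neural-network segmentation and greedy-snake stages of the paper are not reproduced, and all noise levels and particle counts are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def particle_filter(measurements, n=500, q=1.0, r=2.0):
    particles = rng.normal(0, 5, n)          # initial spread of hypotheses
    estimates = []
    for z in measurements:
        particles += rng.normal(0, q, n)     # predict: random-walk motion model
        w = np.exp(-0.5 * ((z - particles) / r) ** 2)  # measurement likelihood
        w /= w.sum()
        idx = rng.choice(n, n, p=w)          # resample by importance weight
        particles = particles[idx]
        estimates.append(particles.mean())   # posterior-mean location estimate
    return np.array(estimates)

true_path = np.cumsum(rng.normal(0, 1.0, 60))   # hidden trajectory
zs = true_path + rng.normal(0, 2.0, 60)         # noisy observations
est = particle_filter(zs)
rmse = np.sqrt(np.mean((est - true_path) ** 2))
```

In the paper's setting the "measurement" would be the kernel/region match score of each candidate location in the frame rather than a scalar observation, but the predict-weight-resample loop is the same.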
Procedia PDF Downloads 340
10516 Faster, Lighter, More Accurate: A Deep Learning Ensemble for Content Moderation
Authors: Arian Hosseini, Mahmudul Hasan
Abstract:
To address the increasing need for efficient and accurate content moderation, we propose an efficient and lightweight deep classification ensemble structure. Our approach is based on a combination of simple visual features, designed for high-accuracy classification of violent content with low false positives. Our ensemble architecture utilizes a set of lightweight models with narrowed-down color features, and we apply it to both images and videos. We evaluated our approach using a large dataset of explosion and blast content and compared its performance to popular deep learning models such as ResNet-50. Our evaluation results demonstrate significant improvements in prediction accuracy, while benefiting from 7.64x faster inference and lower computation cost. While our approach is tailored to explosion detection, it can be applied to other similar content moderation and violence detection use cases as well. Based on our experiments, we propose a 'think small, think many' philosophy in classification scenarios. We argue that transforming a single, large, monolithic deep model into a verification-based step-model ensemble of multiple small, simple, and lightweight models with narrowed-down visual features can lead to predictions with higher accuracy.
Keywords: deep classification, content moderation, ensemble learning, explosion detection, video processing
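A conceptual sketch of the "think small, think many" idea: a verification cascade in which cheap, narrow detectors run in sequence and a sample is flagged only when every stage agrees, which is what suppresses false positives. The stages here are trivial threshold rules on invented color features, standing in for the paper's lightweight learned models.

```python
def make_stage(feature, threshold):
    """A tiny one-feature detector: fires when the feature exceeds a threshold."""
    return lambda sample: sample[feature] > threshold

def cascade_predict(stages, sample):
    # Every small model can veto; all must fire for a positive (violent) label.
    return all(stage(sample) for stage in stages)

stages = [make_stage('red_ratio', 0.5),      # explosions skew red/orange
          make_stage('brightness', 0.7),     # and are locally very bright
          make_stage('flicker', 0.3)]        # and change rapidly across frames

benign = {'red_ratio': 0.8, 'brightness': 0.9, 'flicker': 0.1}  # e.g. a sunset
blast = {'red_ratio': 0.9, 'brightness': 0.95, 'flicker': 0.6}
```

The sunset-like sample passes the color and brightness stages but is vetoed by the temporal stage, which is the kind of false positive a single monolithic color model would struggle with.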
Procedia PDF Downloads 52
10515 Autonomic Sonar Sensor Fault Manager for Mobile Robots
Authors: Martin Doran, Roy Sterritt, George Wilkie
Abstract:
NASA, ESA, and NSSC space agencies have plans to put planetary rovers on Mars in 2020. For these future planetary rovers to succeed, they will depend heavily on sensors to detect obstacles. This will also become of vital importance in the future if rovers become less dependent on commands received from earth-based control and more dependent on self-configuration and self-decision making. These planetary rovers will face harsh environments, and the possibility of hardware failure is high, as seen in past missions. In this paper, we focus on using autonomic principles, where self-healing, self-optimization, and self-adaptation are explored using the MAPE-K model, expanding this model to encapsulate the attributes Awareness, Analysis, and Adjustment (AAA-3). In the experimentation, a Pioneer P3-DX research robot is used to simulate a planetary rover. The sonar sensors on the P3-DX robot are used to simulate the sensors on a planetary rover (even though, in reality, sonar sensors cannot operate in a vacuum). Experiments using the P3-DX robot focus on how our software system can adapt to the loss of sonar sensor functionality. The autonomic manager system is responsible for deciding how to make use of the remaining 'enabled' sonar sensors to compensate for those that are 'disabled'. The key to this research is that the robot can still detect objects even with reduced sonar sensor capability.
Keywords: autonomic, self-adaptation, self-healing, self-optimization
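An illustrative sketch (hypothetical API, not the authors' implementation) of the autonomic-manager idea: a MAPE-K-style loop that monitors sonar readings, marks non-responding sensors as disabled, and plans compensation by redistributing coverage among the sensors that remain enabled.

```python
class SonarFaultManager:
    """Toy autonomic manager: Monitor -> Analyze -> Plan for sonar sensors."""

    def __init__(self, n_sensors):
        self.enabled = set(range(n_sensors))

    def monitor(self, readings):
        # Analyze: a None reading means the sensor failed to respond.
        failed = {i for i, r in readings.items() if r is None}
        self.enabled -= failed            # self-healing: drop faulty sensors
        return failed

    def plan_coverage(self):
        # Adjust: redistribute the 360-degree sweep among enabled sensors.
        n = len(self.enabled)
        return {} if n == 0 else {i: 360.0 / n for i in sorted(self.enabled)}

mgr = SonarFaultManager(8)
mgr.monitor({i: 1.5 for i in range(8)})   # all healthy: 45 degrees each
mgr.monitor({3: None, 5: None})           # two sensors stop responding
coverage = mgr.plan_coverage()            # remaining six cover 60 degrees each
```

The Execute step, omitted here, would re-aim the physical sensors; the point is that obstacle coverage degrades gracefully instead of leaving blind sectors.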
Procedia PDF Downloads 347
10514 Hygro-Thermal Modelling of Timber Decks
Authors: Stefania Fortino, Petr Hradil, Timo Avikainen
Abstract:
Timber bridges have an excellent environmental performance, are economical and relatively easy to build, and can have a long service life. However, the durability of these bridges is the main problem because of their exposure to outdoor climate conditions. The moisture content accumulated in wood over long periods, in combination with certain temperatures, may create conditions suitable for timber decay. In addition, moisture content variations affect the structural integrity, serviceability and loading capacity of timber bridges. Therefore, monitoring of the moisture content in wood is important for the durability of the material and of the whole superstructure. The measurements obtained by the usual sensor-based techniques provide hygro-thermal data only at specific locations of the wood components. In this context, the monitoring can be assisted by numerical modelling to get more information on the hygro-thermal response of the bridges. This work presents a hygro-thermal model based on a multi-phase moisture transport theory to predict the distribution of moisture content, relative humidity and temperature in wood. Below the fibre saturation point, the multi-phase theory simulates three phenomena in cellular wood during moisture transfer, i.e., the diffusion of water vapour in the pores, the sorption of bound water and the diffusion of bound water in the cell walls. In the multi-phase model, the two water phases are separated, and the coupling between them is defined through a sorption rate. Furthermore, an average between the temperature-dependent adsorption and desorption isotherms is used. In previous works by some of the authors, this approach was found very suitable for studying the moisture transport in uncoated and coated stress-laminated timber decks.
Compared to previous works, the hygro-thermal fluxes on the external surfaces include the influence of the absorbed solar radiation over time; consequently, the temperatures on the surfaces exposed to the sun are higher. This affects the whole hygro-thermal response of the timber component. The multi-phase model, implemented in a user subroutine of the Abaqus FEM code, provides the distribution of the moisture content, the temperature and the relative humidity in a volume of the timber deck. As a case study, the hygro-thermal data in wood are collected from the ongoing monitoring of the stress-laminated timber deck of the Tapiola Bridge in Finland, based on integrated humidity-temperature sensors, and the numerical results are found to be in good agreement with the measurements. The proposed model, used to assist the monitoring, can contribute to reducing the maintenance costs of bridges, as well as the cost of instrumentation, and to increasing safety.
Keywords: moisture content, multi-phase models, solar radiation, timber decks, FEM
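A drastically simplified sketch of one ingredient of the model above: bound-water transport reduced to 1D Fickian diffusion through the deck thickness, solved with an explicit finite-difference scheme. The diffusivity, thickness and moisture-content boundary values are assumed round numbers; the vapour phase, the sorption-rate coupling and the solar-radiation surface flux of the full multi-phase model are all omitted.

```python
import numpy as np

def diffuse_moisture(n=51, mc0=0.12, mc_surf=0.20, D=1e-10, L=0.05, steps=5000):
    """Moisture-content profile across a deck of thickness L after `steps` steps."""
    dx = L / (n - 1)
    dt = 0.4 * dx * dx / D            # below the explicit stability limit of 0.5
    mc = np.full(n, mc0)              # initial moisture content (kg/kg)
    for _ in range(steps):
        mc[0] = mc[-1] = mc_surf      # both faces follow ambient humidity
        mc[1:-1] += D * dt / dx ** 2 * (mc[2:] - 2.0 * mc[1:-1] + mc[:-2])
    return mc

profile = diffuse_moisture()          # near-equilibrium after ~230 simulated days
```

Even this toy version shows why long wet periods matter for decay: the interior moisture content lags the surface by months for realistic wood diffusivities, which is what the monitoring-plus-model approach is meant to track.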
Procedia PDF Downloads 174
10513 An Evaluation of Solubility of Wax and Asphaltene in Crude Oil for Improved Flow Properties Using a Copolymer Solubilized in Organic Solvent with an Aromatic Hydrocarbon
Authors: S. M. Anisuzzaman, Sariah Abang, Awang Bono, D. Krishnaiah, N. M. Ismail, G. B. Sandrison
Abstract:
Wax and asphaltene are high-molecular-weight compounds that contribute to the stability of crude oil in a dispersed state. Transportation of crude oil along pipelines from the oil rig to the refineries causes temperature fluctuations, which lead to the coagulation of wax and the flocculation of asphaltenes. This paper focuses on the prevention of wax and asphaltene precipitate deposition on the inner surface of the pipelines by using a wax inhibitor and an asphaltene dispersant. The novelty of this prevention method is the combination of three substances: a wax inhibitor dissolved in a wax inhibitor solvent and an asphaltene solvent, namely, ethylene-vinyl acetate (EVA) copolymer dissolved in methylcyclohexane (MCH) and toluene (TOL), to inhibit the precipitation and deposition of wax and asphaltene. The objective of this paper was to optimize the percentage composition of each component in this inhibitor so as to maximize the viscosity reduction of crude oil. The optimization was divided into two stages: a laboratory experimental stage, in which the viscosity of crude oil samples containing inhibitors of different component compositions was tested at decreasing temperatures, and a data optimization stage using response surface methodology (RSM) to design an optimizing model. The experimental results showed that the combination of 50% EVA + 25% MCH + 25% TOL gave a maximum viscosity reduction of 67%, while the RSM model showed that the combination of 57% EVA + 20.5% MCH + 22.5% TOL gave a maximum viscosity reduction of up to 61%.
Keywords: asphaltene, ethylene-vinyl acetate, methylcyclohexane, toluene, wax
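A hypothetical sketch of the RSM fitting step: fit a second-order polynomial response surface to (EVA fraction, viscosity reduction) observations and locate its maximizer at the vertex. The data points below are invented, loosely shaped around the reported optimum near 57% EVA; they are not the study's measurements, and the real model is a surface over all three component fractions, not one variable.

```python
import numpy as np

eva = np.array([30, 40, 50, 57, 65, 75], dtype=float)        # % EVA in blend
reduction = np.array([35, 48, 58, 61, 57, 43], dtype=float)  # % viscosity drop

# Least-squares fit of reduction ~ b0 + b1*eva + b2*eva^2.
A = np.column_stack([np.ones_like(eva), eva, eva ** 2])
b0, b1, b2 = np.linalg.lstsq(A, reduction, rcond=None)[0]

# For a concave parabola (b2 < 0) the optimum is at the vertex.
eva_opt = -b1 / (2.0 * b2)
```

In a full RSM workflow the same quadratic-fit-then-optimize step is done over the mixture simplex with interaction terms, usually from a designed experiment rather than ad hoc points.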
Procedia PDF Downloads 413
10512 Inventory Management System of Seasonal Raw Materials of Feeds at San Jose Batangas through Integer Linear Programming and VBA
Authors: Glenda Marie D. Balitaan
Abstract:
The branch of business management that deals with inventory planning and control is known as inventory management. It comprises keeping track of supply levels and forecasting demand, as well as scheduling when and how much to order. Keeping excess inventory results in a loss of money, takes up physical space, and raises the risk of damage, spoilage, and loss. On the other hand, too little inventory frequently disrupts operations and raises the possibility of low customer satisfaction, both of which can be detrimental to a company's reputation. The United Victorious Feed Mill Corporation's present inventory management practices were assessed in terms of inventory level, warehouse allocation, ordering frequency, shelf life, and production requirements. To help the company achieve its optimal level of inventory, a mathematical model was created using integer linear programming. Given the seasonality of the raw materials, the objective function was to minimize the cost of purchasing US Soya and Yellow Corn, with warehouse space, annual production requirements, and shelf life all considered as constraints. To ensure that the user only needs one application to record all relevant information, such as production output and delivery, the researcher built a Visual Basic system. Additionally, the system allows management to change the model's parameters.
Keywords: inventory management, integer linear programming, inventory management system, feed mill
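A toy version of the integer-programming formulation, solved by brute-force enumeration so it stays dependency-free: choose integer order quantities of two raw materials that minimize purchase cost subject to a production requirement and a warehouse capacity. All prices, limits and the 30%-soya ration constraint are invented for illustration; the study's actual model also handles shelf life and ordering frequency, and would use a proper ILP solver.

```python
from itertools import product

def optimal_order(price_soya, price_corn, demand, capacity, max_qty=60):
    """Exhaustively search integer (soya, corn) order quantities in tons."""
    best = None
    for soya, corn in product(range(max_qty + 1), repeat=2):
        total = soya + corn
        if total < demand or total > capacity:
            continue                  # must meet demand within warehouse space
        if soya < 0.3 * total:
            continue                  # assumed: feed ration needs >= 30% soya
        cost = price_soya * soya + price_corn * corn
        if best is None or cost < best[0]:
            best = (cost, soya, corn)
    return best

cost, soya, corn = optimal_order(price_soya=30, price_corn=12,
                                 demand=50, capacity=55)
```

Because soya is the expensive ingredient, the optimum pins soya at exactly its 30% floor of the minimum feasible total, which is the kind of corner solution the ILP model exploits.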
Procedia PDF Downloads 81
10511 Estimating Algae Concentration Based on Deep Learning from Satellite Observation in Korea
Authors: Heewon Jeong, Seongpyo Kim, Joon Ha Kim
Abstract:
Over the last few decades, the coastal regions of Korea have experienced red tide algal blooms, which are harmful and toxic to both humans and marine organisms. They have been accelerated by eutrophication from human activities, certain oceanic processes, and climate change. Previous studies have tried to monitor and predict the algae concentration of the ocean with bio-optical algorithms applied to satellite color images. However, accurate estimation of algal blooms remains a challenge because of the complexity of coastal waters. Therefore, this study suggests a new method to identify the concentration of red tide algal blooms from images of the Geostationary Ocean Color Imager (GOCI), which represent the water environment of the sea around Korea. The method employed GOCI images, which record the water-leaving radiances centered at 443 nm, 490 nm and 660 nm, as well as observed weather data (i.e., humidity, temperature and atmospheric pressure) as the database, in order to apply the optical characteristics of algae and train a deep learning algorithm. A convolutional neural network (CNN) was used to extract the significant features from the images, and an artificial neural network (ANN) was then used to estimate the concentration of algae from the extracted features. For training of the deep learning model, a backpropagation learning strategy was developed. The established methods were tested and compared with the performance of the GOCI data processing system (GDPS), which is based on standard image processing and optical algorithms. The model had better performance in estimating algae concentration than the GDPS, which cannot estimate concentrations greater than 5 mg/m³. Thus, the deep learning model was trained successfully to assess algae concentration in spite of the complexity of the water environment. Furthermore, the results of this system and methodology can be used to improve the performance of remote sensing.
Acknowledgement: This work was supported by the 'Climate Technology Development and Application' research project (#K07731) through a grant provided by GIST in 2017.
Keywords: deep learning, algae concentration, remote sensing, satellite
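A toy stand-in for the ANN regression stage with its backpropagation training: a one-hidden-layer network mapping three water-leaving radiances (echoing the 443/490/660 nm bands) to an algae concentration. Both the data and the target relation are synthetic, and the CNN feature extractor and real GOCI imagery are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, (200, 3))                     # radiances at 3 bands
y = (2.0 * X[:, 0] - X[:, 1] + 0.5 * X[:, 2])[:, None]  # synthetic concentration

W1 = rng.normal(0.0, 0.5, (3, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 0.5, (8, 1)); b2 = np.zeros(1)
lr, losses = 0.1, []
for _ in range(2000):
    h = np.tanh(X @ W1 + b1)                # forward pass, hidden layer
    pred = h @ W2 + b2
    err = pred - y
    losses.append(float(np.mean(err ** 2)))
    g2 = 2.0 * err / len(X)                 # gradient of the MSE loss
    gW2, gb2 = h.T @ g2, g2.sum(axis=0)
    g1 = (g2 @ W2.T) * (1.0 - h ** 2)       # backpropagate through tanh
    gW1, gb1 = X.T @ g1, g1.sum(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2          # gradient-descent update
    W1 -= lr * gW1; b1 -= lr * gb1
```

In the study this regression head would sit on top of CNN-extracted image features plus the weather variables; the training loop is the same backpropagation strategy in miniature.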
Procedia PDF Downloads 182
10510 Modeling Continuous Flow in a Curved Channel Using Smoothed Particle Hydrodynamics
Authors: Indri Mahadiraka Rumamby, R. R. Dwinanti Rika Marthanty, Jessica Sjah
Abstract:
Smoothed particle hydrodynamics (SPH) was originally created to simulate non-axisymmetric phenomena in astrophysics. However, the method still has several shortcomings, namely the high computational cost required to model values at high resolution and problems with boundary conditions. The difficulty of modeling boundary conditions occurs because the SPH method is affected by particle deficiency, due to the integral of the kernel function being truncated at the boundary. This research aims to determine whether SPH modeling with a focus on boundary layer interactions and continuous flow can produce quantifiably accurate values at low computational cost. This research combines algorithms and coding in the main program for the meandering river, the continuous flow algorithm, and the solid-fluid algorithm, with the aim of obtaining quantitatively accurate results on solid-fluid interactions with continuous flow in a meandering channel using the SPH method. This study uses the Fortran programming language to implement the SPH numerical method; the model takes the form of a U-shaped meandering open channel in 3D, where the channel walls are soil particles, and uses a continuous flow with a limited number of particles.
Keywords: smoothed particle hydrodynamics, computational fluid dynamics, numerical simulation, fluid mechanics
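A minimal 1D sketch (in Python, not the authors' 3D Fortran model) of the SPH machinery the abstract relies on: the cubic-spline smoothing kernel and the summation density estimate it induces. It also exposes the particle-deficiency problem discussed above: near a boundary the kernel support is truncated, so the summed density drops even though the physical density does not.

```python
import numpy as np

def cubic_spline_w(r, h):
    """Standard 1D cubic-spline SPH kernel with smoothing length h."""
    q = np.abs(r) / h
    sigma = 2.0 / (3.0 * h)                 # 1D normalization constant
    return sigma * np.where(q <= 1.0, 1.0 - 1.5 * q ** 2 + 0.75 * q ** 3,
                   np.where(q <= 2.0, 0.25 * (2.0 - q) ** 3, 0.0))

def summation_density(x, m, h):
    # rho_i = sum_j m_j W(x_i - x_j, h): each particle smooths its mass.
    r = x[:, None] - x[None, :]
    return (m[None, :] * cubic_spline_w(r, h)).sum(axis=1)

dx = 0.1
x = np.arange(0.0, 10.0 + dx / 2, dx)       # evenly spaced particles
m = np.full_like(x, 1.0 * dx)               # mass for unit density rho = 1
rho = summation_density(x, m, h=1.2 * dx)   # interior recovers rho close to 1
```

Boundary treatments such as ghost or dummy particles exist precisely to repair the density deficit visible at the ends of this array, which is the issue the channel-wall soil particles address in the study.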
Procedia PDF Downloads 128