Search results for: interlaminar damage model
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 18305

10475 Antibacterial Studies on Cellulolytic Bacteria for Termite Control

Authors: Essam A. Makky, Chan Cai Wen, Muna Jalal, Mashitah M. Yusoff

Abstract:

Termites are considered important pests that can cause severe wood damage and economic losses in urban, agricultural and forest areas of Malaysia. The ability of termites to degrade cellulose depends on an association with gut cellulolytic microflora, better known as mutual symbionts. By disrupting this mutual symbiotic association, better pest control practices can be attained. This study aimed to isolate cellulolytic bacteria from the gut of termites and to carry out antibacterial studies for termite control. Cellulase activity was confirmed by qualitative and quantitative methods. The effects of antibiotics and their combinations, as well as heavy metals and disinfectants, were assessed using the disc diffusion method. Effective antibacterial agents were then applied in termite treatments to study their effectiveness as termiticides. Twenty-four cellulolytic bacteria were isolated, purified and screened from the gut of termites. All isolates were identified as Gram-negative and either rod- or coccus-shaped. In the antibacterial studies, isolates were found to be 100% sensitive to 4 antibiotics (rifampicin, tetracycline, gentamycin, and neomycin), 2 heavy metals (cadmium and mercury) and 3 disinfectants (lactic acid, formalin, and hydrogen peroxide). 22 out of 36 antibiotic combinations showed a synergistic effect, while 15 antibiotic combinations showed an antagonistic effect on the isolates. The 2 heavy metals and 3 disinfectants that showed 100% effectiveness, as well as the 22 antibiotic combinations that showed a synergistic effect, were used for termite control. Among the 27 selected antibacterial agents, 12 were found to be effective in killing all the termites within 1 to 6 days. Mercury, lactic acid, formalin and hydrogen peroxide were found to be the most effective termiticides, killing all termites within 1 day. These effective antibacterial agents possess great potential as a new application to control termite pest species in the future.

Keywords: antibacterial, cellulase, termiticide, termites

Procedia PDF Downloads 456
10474 Using Crowdsourced Data to Assess Safety in Developing Countries: The Case Study of Eastern Cairo, Egypt

Authors: Mahmoud Ahmed Farrag, Ali Zain Elabdeen Heikal, Mohamed Shawky Ahmed, Ahmed Osama Amer

Abstract:

Crowdsourced data refers to data that is collected and shared by a large number of individuals or organizations, often through the use of digital technologies such as mobile devices and social media. The shortage of crash data collection in developing countries makes it difficult to fully understand and address road safety issues in these regions. In developing countries, crowdsourced data can be a valuable tool for improving road safety, particularly in urban areas where the majority of road crashes occur. This study is the first to develop safety performance functions using crowdsourced data by adopting a negative binomial structure model and a Full Bayes model to investigate traffic safety for urban road networks and provide insights into the impact of roadway characteristics. Furthermore, as part of the safety management process, network screening was carried out by applying two different methods to rank the most hazardous road segments: the PCR method (adopted in the Highway Capacity Manual HCM) as well as a graphical method using GIS tools, in order to compare and validate the rankings. Lastly, recommendations were suggested for policymakers to ensure safer roads.
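
As a rough illustration of the kind of model described above (not the authors' code), a negative binomial safety performance function can be fitted with statsmodels; the column names (crashes, aadt, seg_length, lane_width) and the dispersion value are hypothetical placeholders.

```python
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("segments.csv")           # one row per road segment (assumed layout)
X = sm.add_constant(df[["aadt", "seg_length", "lane_width"]])
y = df["crashes"]                          # observed crash counts

# Negative binomial GLM: E[crashes] = exp(b0 + b1*aadt + b2*seg_length + ...)
nb_model = sm.GLM(y, X, family=sm.families.NegativeBinomial(alpha=1.0))
nb_fit = nb_model.fit()
print(nb_fit.summary())

# Predicted crash frequency per segment can then feed a network-screening ranking.
df["predicted_crashes"] = nb_fit.predict(X)
print(df.sort_values("predicted_crashes", ascending=False).head(10))
```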

Keywords: crowdsourced data, road crashes, safety performance functions, Full Bayes models, network screening

Procedia PDF Downloads 13
10473 Evaluation of Water Management Options to Improve the Crop Yield and Water Productivity for Semi-Arid Watershed in Southern India Using AquaCrop Model

Authors: V. S. Manivasagam, R. Nagarajan

Abstract:

Modeling soil, water and crop growth interactions is attaining major importance, considering future climate change and water availability for agriculture to meet the growing food demand. Progress in understanding the crop growth response during water stress periods through a crop modeling approach provides an opportunity for improving and sustaining future agricultural water use efficiency. An attempt has been made to evaluate the potential use of the crop modeling approach for assessing the minimal supplementary irrigation requirement for crop growth during water-limited conditions and its practical significance for sustainable improvement of crop yield and water productivity. Among the numerous crop models, the water-driven AquaCrop model has been chosen for the present study, considering its modeling approach and the impact of water stress on yield simulation. The study was carried out in the rainfed maize-growing area of the semi-arid Shanmuganadi watershed (a tributary of the Cauvery river system) located in southern India during the rabi cropping season (October-February). In addition to the actual rainfed maize growth simulation, irrigated maize scenarios were simulated to assess the supplementary irrigation requirement during water shortage conditions for the period 2012-2015. The simulation results for rainfed maize showed that an average maize yield of 0.5-2 t ha⁻¹ was observed during deficit monsoon seasons (<350 mm), whereas 5.3 t ha⁻¹ was observed during sufficient monsoon periods (>350 mm). Scenario results for the irrigated maize simulation during deficit monsoon periods revealed that 150-200 mm of supplementary irrigation ensured an irrigated maize yield of 5.8 t ha⁻¹. Thus, the results clearly showed that minimal application of supplementary irrigation during the critical growth period, together with the deficit rainfall, increased crop water productivity from 1.07 to 2.59 kg m⁻³ for the major soil types. Overall, AquaCrop is found to be very effective for sustainable irrigation assessment, considering the model's simplicity and minimal input requirements.

Keywords: AquaCrop, crop modeling, rainfed maize, water stress

Procedia PDF Downloads 251
10472 Societal Resilience Assessment in the Context of Critical Infrastructure Protection

Authors: Hannah Rosenqvist, Fanny Guay

Abstract:

Critical infrastructure protection has been an important topic for several years. Programmes such as the European Programme for Critical Infrastructure Protection (EPCIP), the Critical Infrastructure Warning Information Network (CIWIN) and the European Reference Network for Critical Infrastructure Protection (ERNCIP) have been the pillars of the work done since 2006. However, measuring critical infrastructure resilience has not been an easy task. This has to do with the fact that the concept of resilience has several definitions and is applied in different domains such as engineering and the social sciences. Since June 2015, the EU project IMPROVER has been focusing on developing a methodology for implementing a combination of societal, organizational and technological resilience concepts, in the hope of increasing critical infrastructure resilience. For this paper, we performed research on how to include societal resilience as a form of measurement in the context of critical infrastructure resilience. Because one of the main purposes of critical infrastructure (CI) is to deliver services to society, we believe that societal resilience is an important factor that should be considered when assessing overall CI resilience. We found that existing methods for CI resilience assessment focus mainly on technical aspects and that it was therefore necessary to develop a resilience model that takes social factors into account. The model developed within the project IMPROVER aims to include the community's expectations of infrastructure operators as well as information sharing with the public and planning processes. By considering such aspects, the IMPROVER framework not only helps operators to increase the resilience of their infrastructures on the technical or organizational side, but aims to strengthen community resilience as a whole. This will further be achieved by taking interdependencies between critical infrastructures into consideration. The knowledge gained during this project will enrich current European policies and practices for improved disaster risk management. The framework for societal resilience analysis is based on three dimensions of societal resilience: coping capacity, adaptive capacity and transformative capacity, which are capacities that have been recognized throughout a widespread literature review in the field. A set of indicators has been defined that describes a community's maturity within these resilience dimensions. Further, the indicators are categorized into six community assets that need to be accessible and utilized in such a way that they allow responding to changes and unforeseen circumstances. We conclude that the societal resilience model developed within the project IMPROVER can give a good indication of the level of societal resilience to critical infrastructure operators.

Keywords: community resilience, critical infrastructure protection, critical infrastructure resilience, societal resilience

Procedia PDF Downloads 214
10471 Intelligent Transport System: Classification of Traffic Signs Using Deep Neural Networks in Real Time

Authors: Anukriti Kumar, Tanmay Singh, Dinesh Kumar Vishwakarma

Abstract:

Traffic control has been one of the most common and irritating problems since automobiles first hit the roads. Problems like traffic congestion have led to a significant time burden around the world, and one significant solution to these problems can be the proper implementation of the Intelligent Transport System (ITS). It involves the integration of various tools like smart sensors, artificial intelligence, positioning technologies and mobile data services to manage traffic flow, reduce congestion and enhance drivers' ability to avoid accidents during adverse weather. Road and traffic sign recognition is an emerging field of research in ITS. The traffic sign classification problem needs to be solved, as it is a major step in our journey towards building semi-autonomous/autonomous driving systems. This work focuses on implementing an approach to solve the problem of traffic sign classification by developing a Convolutional Neural Network (CNN) classifier using the GTSRB (German Traffic Sign Recognition Benchmark) dataset. Rather than using hand-crafted features, our model addresses the concern of an exploding number of parameters and makes use of data augmentation methods. Our model achieved an accuracy of around 97.6%, which is comparable to various state-of-the-art architectures.
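
A minimal Keras sketch of a CNN classifier for the 43-class GTSRB dataset is shown below; the layer sizes, input resolution and training settings are illustrative assumptions, not the exact architecture of the paper.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(32, 32, 3)),              # resized traffic-sign images
    layers.Conv2D(32, 3, activation="relu"),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Dropout(0.25),
    layers.Conv2D(64, 3, activation="relu"),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Dropout(0.25),
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(43, activation="softmax"),       # 43 GTSRB traffic-sign classes
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(x_train, y_train, validation_split=0.1, epochs=20, batch_size=64)
```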

Keywords: multiclass classification, convolutional neural network, OpenCV

Procedia PDF Downloads 159
10470 When Conducting an Analysis of Workplace Incidents, It Is Imperative to Meticulously Calculate Both the Frequency and Severity of Injuries Sustained

Authors: Arash Yousefi

Abstract:

Experts suggest that relying exclusively on parameters to convey a situation or establish a condition may not be adequate. Assessing and appraising incidents in a system based on accident parameters, such as accident frequency, lost workdays, or fatalities, may not always be precise and can occasionally be erroneous. The accident frequency rate is a metric that assesses the correlation between the number of accidents causing work-time loss due to injuries and the total working hours of personnel over a year. Traditionally, this has been calculated on the basis of one million working hours, but the U.S. Occupational Safety and Health Administration (OSHA) has updated its standards: a new base of 200,000 working hours is now used to compute the accident frequency rate. It is crucial to ensure that the total working hours of employees are represented consistently when calculating individual event and incident numbers. The accident severity rate is a metric used to determine the amount of working time lost during a given period, often a year, in relation to the total number of working hours; it provides valuable insight into the number of days lost due to work-related incidents per working hour. Calculating the severity of an incident can be difficult if a worker suffers permanent disability or death. To determine lost days in such cases, the coefficients specified in the tables of equivalent days in the OSHA or ANSI standards for disabling injuries are used. The accident frequency rate denotes how often accidents occur, while the accident severity rate specifies the extent of damage and injury caused by these accidents. These two coefficients are crucial in accurately assessing the magnitude and impact of accidents.
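
As a worked illustration of the arithmetic described above (not taken from the paper), both rates can be computed on the 200,000-hour base; the plant figures used here are hypothetical.

```python
# 200,000 hours corresponds roughly to 100 full-time workers x 40 h/week x 50 weeks.
def frequency_rate(recordable_injuries: int, hours_worked: float,
                   base_hours: float = 200_000) -> float:
    """Lost-time injuries per base_hours of exposure."""
    return recordable_injuries * base_hours / hours_worked

def severity_rate(lost_workdays: int, hours_worked: float,
                  base_hours: float = 200_000) -> float:
    """Lost workdays per base_hours of exposure."""
    return lost_workdays * base_hours / hours_worked

# Hypothetical plant: 350 employees x ~2,000 h/year = 700,000 hours,
# 7 lost-time injuries and 90 lost workdays in the year.
hours = 350 * 2_000
print(frequency_rate(7, hours))   # -> 2.0 injuries per 200,000 h
print(severity_rate(90, hours))   # -> ~25.7 lost days per 200,000 h
```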

Keywords: incidents, safety, analysis, frequency, severity, injuries, determine

Procedia PDF Downloads 76
10469 Strategic Asset Allocation Optimization: Enhancing Portfolio Performance Through PCA-Driven Multi-Objective Modeling

Authors: Ghita Benayad

Abstract:

Asset allocation, which affects the long-term profitability of portfolios by distributing assets to fulfill a range of investment objectives, is the cornerstone of investment management in the dynamic and complicated world of financial markets. This paper offers a technique for optimizing strategic asset allocation with the goal of improving portfolio performance by addressing the inherent complexity and uncertainty of the market through the use of Principal Component Analysis (PCA) in a multi-objective modeling framework. The first section of the study starts with a critical evaluation of conventional asset allocation techniques, highlighting how poorly they capture the intricate relationships between assets and the volatile nature of the market. In order to overcome these challenges, the project suggests a PCA-driven methodology that isolates important characteristics influencing asset returns by decreasing the dimensionality of the investment universe. This reduction provides a stronger basis for asset allocation decisions by facilitating a clearer understanding of market structures and behaviors. Using a multi-objective optimization model, the project builds on this foundation by taking into account a number of performance metrics at once, including risk minimization, return maximization, and the accomplishment of predetermined investment goals like regulatory compliance or sustainability standards. This model provides a more comprehensive understanding of investor preferences and portfolio performance in comparison to conventional single-objective optimization techniques. The PCA-driven multi-objective optimization model is then applied to historical market data, aiming to construct portfolios that perform better under different market conditions. Compared to portfolios produced by conventional asset allocation methodologies, the results show that portfolios optimized using the proposed method display improved risk-adjusted returns, more resilience to market downturns, and better alignment with specified investment objectives. The study also looks at the implications of this PCA technique for portfolio management, including the prospect that it might give investors a more advanced framework for navigating financial markets. The findings suggest that by combining PCA with multi-objective optimization, investors may obtain a more strategic and informed asset allocation that is responsive to both market conditions and individual investment preferences. In conclusion, this capstone project advances the field of financial engineering by creating a sophisticated asset allocation optimization model that integrates PCA with multi-objective optimization. In addition to raising questions about the current state of asset allocation, the proposed method of portfolio management opens up new avenues for research and application in the area of investment techniques.
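
A hedged sketch of the general idea follows: reduce the return matrix with PCA, reconstruct an approximate covariance from the retained factors, and trade off mean return against variance with a weighted-sum multi-objective formulation. The synthetic data, risk-aversion weight, position cap and number of components are placeholder assumptions, not the study's inputs.

```python
import numpy as np
from sklearn.decomposition import PCA
from scipy.optimize import minimize

returns = np.random.default_rng(0).normal(0.0005, 0.01, size=(750, 20))  # T x N daily returns

pca = PCA(n_components=5)
factors = pca.fit_transform(returns)           # exposures to principal factors
B = pca.components_                            # factor loadings (5 x N)
factor_cov = np.cov(factors, rowvar=False)
cov = B.T @ factor_cov @ B + np.diag(returns.var(axis=0) * 0.05)  # factor-based covariance
mu = returns.mean(axis=0)

risk_aversion = 5.0
def objective(w):
    # weighted sum of the two objectives: maximize return, minimize variance
    return -(w @ mu) + risk_aversion * (w @ cov @ w)

n = returns.shape[1]
cons = ({"type": "eq", "fun": lambda w: w.sum() - 1.0},)   # fully invested
bounds = [(0.0, 0.1)] * n                                  # long-only, 10% position cap
res = minimize(objective, np.full(n, 1.0 / n), bounds=bounds, constraints=cons)
print(res.x.round(3))                                      # optimized weights
```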

Keywords: asset allocation, portfolio optimization, principal component analysis, multi-objective modelling, financial market

Procedia PDF Downloads 31
10468 The Comparison of Joint Simulation and Estimation Methods for the Geometallurgical Modeling

Authors: Farzaneh Khorram

Abstract:

This paper endeavors to construct a block model to assess grinding energy consumption (CCE) and pinpoint blocks with the highest potential for energy usage during the grinding process within a specified region. Leveraging geostatistical techniques, particularly joint estimation, or simulation, based on geometallurgical data from various mineral processing stages, our objective is to forecast CCE across the study area. The dataset encompasses variables obtained from 2754 drill samples and a block model comprising 4680 blocks. The initial analysis encompassed exploratory data examination, variography, multivariate analysis, and the delineation of geological and structural units. Subsequent analysis involved the assessment of contacts between these units and the estimation of CCE via cokriging, considering its correlation with SPI. The selection of blocks exhibiting maximum CCE holds paramount importance for cost estimation, production planning, and risk mitigation. The study conducted exploratory data analysis on lithology, rock type, and failure variables, revealing seamless boundaries between geometallurgical units. Simulation methods, such as Plurigaussian and Turning band, demonstrated more realistic outcomes compared to cokriging, owing to the inherent characteristics of geometallurgical data and the limitations of kriging methods.

Keywords: geometallurgy, multivariate analysis, plurigaussian, turning band method, cokriging

Procedia PDF Downloads 41
10467 The Design and Implementation of a Calorimeter for Evaluation of the Thermal Performance of Materials: The Case of Phase Change Materials

Authors: Ebrahim Solgi, Zahra Hamedani, Behrouz Mohammad Kari, Ruwan Fernando, Henry Skates

Abstract:

The use of thermal energy storage (TES) as part of a passive design strategy can reduce a building’s energy demand. TES materials do this by increasing the lag between energy consumption and energy supply by absorbing, storing and releasing energy in a controlled manner. The increase of lightweight construction in the building industry has made it harder to utilize thermal mass. Consequently, Phase Change Materials (PCMs) are a promising alternative as they can be manufactured in thin layers and used with lightweight construction to store latent heat. This research investigates utilizing PCMs, with the first step being measuring their performance under experimental conditions. To do this requires three components. The first is a calorimeter for measuring indoor thermal conditions, the second is a pyranometer for recording the solar conditions: global, diffuse and direct radiation and the third is a data-logger for recording temperature and humidity for the studied period. This paper reports on the design and implementation of an experimental setup used to measure the thermal characteristics of PCMs as part of a wall construction. The experimental model has been simulated with the software EnergyPlus to create a reliable simulation model that warrants further investigation.

Keywords: phase change materials, EnergyPlus, experimental evaluation, night ventilation

Procedia PDF Downloads 239
10466 Pose-Dependency of Machine Tool Structures: Appearance, Consequences, and Challenges for Lightweight Large-Scale Machines

Authors: S. Apprich, F. Wulle, A. Lechler, A. Pott, A. Verl

Abstract:

Large-scale machine tools for the manufacturing of large workpieces, e.g. blades, casings or gears for wind turbines, feature pose-dependent dynamic behavior. Small structural damping coefficients lead to long decay times for structural vibrations that have negative impacts on the production process. Typically, these vibrations are handled by increasing the stiffness of the structure by adding mass. That is counterproductive to the needs of sustainable manufacturing, as it leads to higher resource consumption both in material and in energy. Recent research activities have led to higher resource efficiency through radical mass reduction, relying on control-integrated active vibration avoidance and damping methods. These control methods depend on information describing the dynamic behavior of the controlled machine tools in order to tune the avoidance or reduction method parameters according to the current state of the machine. The paper presents the appearance, consequences and challenges of the pose-dependent dynamic behavior of lightweight large-scale machine tool structures in production. The paper starts with a theoretical introduction to the challenges of lightweight machine tool structures resulting from reduced stiffness. The statement of the pose-dependent dynamic behavior is corroborated by the results of an experimental modal analysis of a lightweight test structure. Afterwards, the consequences of the pose-dependent dynamic behavior of lightweight machine tool structures for the use of active control and vibration reduction methods are explained. Based on the state of the art on pose-dependent dynamic machine tool models and the modal investigation of an FE-model of the lightweight test structure, the criteria for a pose-dependent model for use in vibration reduction are derived. The paper concludes with an outlook on the approach for a general pose-dependent model of the dynamic behavior of large lightweight machine tools that provides the necessary input to the aforementioned vibration avoidance and reduction methods to properly tackle machine vibrations.

Keywords: dynamic behavior, lightweight, machine tool, pose-dependency

Procedia PDF Downloads 442
10465 Discourses in Mother Tongue-Based Classes: The Case of Hiligaynon Language

Authors: Kayla Marie Sarte

Abstract:

This study sought to describe mother tongue-based classes in the light of classroom interactional discourse using the Sinclair and Coulthard model. It specifically identified the exchanges, grouped into Teaching and Boundary types; moves, coded as Opening, Answering and Feedback; and the occurrence of the 13 acts (Bid, Cue, Nominate, Reply, React, Acknowledge, Clue, Accept, Evaluate, Loop, Comment, Starter, Conclusion, Aside and Silent Stress) in the classroom, and determined what these reveal about the teaching and learning processes in the MTB classroom. Being a qualitative study, using the Single Collective Case Within-Site (embedded) design, varied data collection procedures such as non-participant observations, audio-recordings and transcription of MTB classes, and semi-structured interviews were utilized. The results revealed the presence of all the codes in the model (except for the silent stress) which also implied that the Hiligaynon mother tongue-based class was eclectic, cultural and communicative, and had a healthy, analytical and focused environment which aligned with the aims of MTB-MLE, and affirmed the purported benefits of mother tongue teaching. Through the study, gaps in the mother tongue teaching and learning were also identified which involved the difficulty of children in memorizing Hiligaynon terms expressed in English in their homes and in the communities.

Keywords: discourse analysis, language teaching and learning, mother tongue-based education, multilingualism

Procedia PDF Downloads 246
10464 A Kinetic Study on Recovery of High-Purity Rutile TiO₂ Nanoparticles from Titanium Slag Using Sulfuric Acid under Sonochemical Procedure

Authors: Alireza Bahramian

Abstract:

High-purity TiO₂ nanoparticles (NPs) with sizes ranging between 50 nm and 100 nm are synthesized from titanium slag through the sulphate route under a sonochemical procedure. The effects of dissolution parameters such as the sulfuric acid/slag weight ratio, caustic soda concentration, digestion temperature and time, and initial particle size of the dried slag on the extraction efficiency of TiO₂ and the removal of iron are examined. By optimizing the digestion conditions, a rutile TiO₂ powder with a surface area of 42 m²/g and a mean pore diameter of 22.4 nm was prepared. A thermo-kinetic analysis showed that the digestion temperature has an important effect, while the acid/slag weight ratio and the initial size of the slag have a moderate effect on the dissolution rate. The shrinking-core model, including both chemical surface reaction and surface diffusion, is used to describe the leaching process. A low activation energy value, 38.12 kJ/mol, indicates that the surface chemical reaction is the rate-controlling step. The kinetic analysis suggested a first-order reaction mechanism with respect to the acid concentration.
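
For reference, a hedged sketch of the standard shrinking-core rate expressions behind this kind of analysis is given below; the symbols (conversion X, apparent rate constants k_r and k_d, pre-exponential factor A) follow common usage and are not necessarily the authors' notation.

```latex
\begin{align}
  1-(1-X)^{1/3} &= k_{r}\,t
    && \text{surface chemical reaction control}\\
  1-3(1-X)^{2/3}+2(1-X) &= k_{d}\,t
    && \text{product-layer (surface) diffusion control}\\
  k_{r} &= A\,[\mathrm{H_2SO_4}]\,e^{-E_a/(RT)}
    && \text{first order in acid, } E_a \approx 38.12~\mathrm{kJ\,mol^{-1}}
\end{align}
```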

Keywords: TiO₂ nanoparticles, titanium slag, dissolution rate, sonochemical method, thermo-kinetic study

Procedia PDF Downloads 245
10463 Financial Modeling for Net Present Benefit Analysis of Electric Bus and Diesel Bus and Applications to NYC, LA, and Chicago

Authors: Jollen Dai, Truman You, Xinyun Du, Katrina Liu

Abstract:

Transportation is one of the leading sources of greenhouse gas (GHG) emissions. Thus, to meet the 2015 Paris Agreement, all countries must adopt a different and more sustainable transportation system. From bikes to Maglev, the world is slowly shifting to sustainable transportation. To develop a useful public transit system, a sustainable web of buses must be implemented. As of now, only a handful of cities have adopted a detailed plan to implement a full fleet of e-buses by the 2030s, with Shenzhen in the lead. Every change requires a detailed plan and a focused analysis of its impacts. In this report, the economic and financial implications have been taken into consideration to develop a well-rounded 10-year plan for New York City. We also apply the same financial model to two other cities, LA and Chicago. We picked NYC, Chicago, and LA to conduct the comparative NPB analysis since they are all big metropolitan cities and have complex transportation systems. All three cities have started action plans to achieve a full fleet of e-buses in the coming decades. In addition, their energy carbon footprints and energy prices are very different, which are key factors in the benefits of electric buses. Using TCO (Total Cost of Ownership) financial analysis, we developed a model to calculate the NPB (Net Present Benefit) and compare EBS (electric buses) to DBS (diesel buses). We have considered all essential aspects in our model: initial investment, including the cost of a bus, charger, and installation; government funding (federal, state, local); labor cost; energy (electricity or diesel) cost; maintenance cost; insurance cost; health and environmental benefits; and V2G (vehicle-to-grid) benefits. We see about $1,400,000 in benefits over a 12-year lifetime of an EBS compared to a DBS, provided government funding offsets 50% of the EBS purchase cost. With the government subsidy, an EBS starts to generate positive cash flow in the 5th year and can pay back its investment in 5 years. Note that in our model environmental and health benefits are considered, and every year $50,000 is counted as a health benefit per bus. Besides health benefits, the most significant benefits come from energy cost savings and maintenance savings, which are about $600,000 and $200,000 over the 12-year life cycle. Using linear regression, given certain budget limitations, we then designed an optimal three-phase process to replace all NYC diesel buses with electric buses in 10 years, i.e., by 2033. The linear regression process minimizes the total cost over the years and yields the lowest environmental cost. The overall benefit of replacing all DBS with EBS for NYC is over $2.1 billion by the year 2033. For LA and Chicago, the benefits of electrifying the current bus fleets are $1.04 billion and $634 million by 2033. All NPB analyses and the algorithm to optimize the electrification phase process are implemented in Python code and can be shared.
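
A simplified net-present-benefit sketch in the spirit of the TCO comparison described above is shown below; all cash-flow figures are illustrative placeholders (chosen only to be of the same order as the savings quoted in the abstract), not the study's actual NYC/LA/Chicago inputs.

```python
def npv(cashflows, rate=0.05):
    """Discount a list of yearly cash flows (year 0 first) to present value."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

LIFETIME = 12                       # years of bus service
ebus_capex = 750_000                # bus + charger + installation (assumed)
subsidy = 0.5 * ebus_capex          # 50% purchase-cost offset (assumed)
dbus_capex = 500_000                # diesel bus purchase cost (assumed)

# Assumed yearly figures per bus: energy, maintenance, health benefit, V2G vs. fuel, maintenance
ebus_yearly = -30_000 - 20_000 + 50_000 + 5_000
dbus_yearly = -80_000 - 37_000

ebus_flows = [-(ebus_capex - subsidy)] + [ebus_yearly] * LIFETIME
dbus_flows = [-dbus_capex] + [dbus_yearly] * LIFETIME

net_present_benefit = npv(ebus_flows) - npv(dbus_flows)
print(f"NPB of one e-bus over a diesel bus: ${net_present_benefit:,.0f}")
```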

Keywords: financial modeling, total cost ownership, net present benefits, electric bus, diesel bus, NYC, LA, Chicago

Procedia PDF Downloads 25
10462 Analysis Model for the Relationship of Users, Products, and Stores on Online Marketplace Based on Distributed Representation

Authors: Ke He, Wumaier Parezhati, Haruka Yamashita

Abstract:

Recently, online marketplaces such as Rakuten and Alibaba have become some of the most popular e-commerce platforms in Asia. In these shopping websites, consumers can select and purchase products from a large number of stores. Additionally, consumers of an e-commerce site have to register their name, age, gender, and other information in advance to access their registered account. Therefore, a method for analyzing consumer preferences from both the store side and the product side is required. This study uses the Doc2Vec method, which has been studied in the field of natural language processing. Doc2Vec has been used in many cases, in the field of document classification, to extract semantic relationships between documents (here representing consumers) and words (here representing products). This concept is applicable to representing the relationship between users and items; however, the problem is that one more factor (i.e., shops) needs to be considered in Doc2Vec. More precisely, a method for analyzing the relationship between consumers, stores, and products is required. The purpose of our study is to combine the Doc2Vec analysis for users and shops with that for users and items in the same feature space. This method enables the calculation of similar shops and items for each user. In this study, we analyze real data accumulated in an online marketplace and demonstrate the efficiency of the proposal.
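
A rough sketch of one way to realize this idea with gensim's Doc2Vec is shown below: each user's purchase history is treated as a "document" whose words are shop and product IDs, so user vectors, shop vectors and product vectors end up in one shared feature space. The data layout and tagging scheme are assumptions for illustration, not necessarily the authors' formulation.

```python
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

purchase_histories = {
    "user_1": [("shop_A", "item_10"), ("shop_A", "item_11"), ("shop_B", "item_20")],
    "user_2": [("shop_B", "item_20"), ("shop_B", "item_21"), ("shop_C", "item_30")],
}

# Each user is a tagged document; its "words" are the shop and item tokens.
corpus = [
    TaggedDocument(words=[tok for pair in pairs for tok in pair], tags=[user])
    for user, pairs in purchase_histories.items()
]

model = Doc2Vec(vector_size=50, min_count=1, epochs=40, window=2)
model.build_vocab(corpus)
model.train(corpus, total_examples=model.corpus_count, epochs=model.epochs)

# Users live in model.dv, shops and items live in model.wv -- one shared space,
# so similar shops/items can be ranked against each user vector.
user_vec = model.dv["user_1"]
print(model.wv.similar_by_vector(user_vec, topn=3))
```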

Keywords: Doc2Vec, online marketplace, marketing, recommendation systems

Procedia PDF Downloads 103
10461 Predicting Subsurface Abnormalities Growth Using Physics-Informed Neural Networks

Authors: Mehrdad Shafiei Dizaji, Hoda Azari

Abstract:

The research explores the pioneering integration of Physics-Informed Neural Networks (PINNs) into the domain of Ground-Penetrating Radar (GPR) data prediction, akin to advancements in medical imaging for tracking tumor progression in the human body. This research presents a detailed development framework for a specialized PINN model proficient at interpreting and forecasting GPR data, much like how medical imaging models predict tumor behavior. By harnessing the synergy between deep learning algorithms and the physical laws governing subsurface structures—or, in medical terms, human tissues—the model effectively embeds the physics of electromagnetic wave propagation into its architecture. This ensures that predictions not only align with fundamental physical principles but also mirror the precision needed in medical diagnostics for detecting and monitoring tumors. The suggested deep learning structure comprises three components: a CNN, a spatial feature channel attention (SFCA) mechanism, and ConvLSTM, along with temporal feature frame attention (TFFA) modules. The attention mechanism computes channel attention and temporal attention weights using self-adaptation, thereby fine-tuning the visual and temporal feature responses to extract the most pertinent and significant visual and temporal features. By integrating physics directly into the neural network, our model has shown enhanced accuracy in forecasting GPR data. This improvement is vital for conducting effective assessments of bridge deck conditions and other evaluations related to civil infrastructure. The use of Physics-Informed Neural Networks (PINNs) has demonstrated the potential to transform the field of Non-Destructive Evaluation (NDE) by enhancing the precision of infrastructure deterioration predictions. Moreover, it offers a deeper insight into the fundamental mechanisms of deterioration, viewed through the prism of physics-based models.
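
A conceptual PyTorch sketch of the physics-informed part is given below: the training loss combines a data-fit term with the residual of a 1-D electromagnetic wave equation u_tt = c²·u_zz evaluated by automatic differentiation. The network shape, wave speed and sampling are placeholder assumptions and do not reproduce the paper's CNN/SFCA/ConvLSTM architecture.

```python
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(2, 64), nn.Tanh(),
                    nn.Linear(64, 64), nn.Tanh(),
                    nn.Linear(64, 1))           # u(z, t): predicted GPR amplitude
c = 0.1                                          # assumed wave speed in the medium

def pde_residual(zt):
    zt = zt.requires_grad_(True)
    u = net(zt)
    grads = torch.autograd.grad(u, zt, torch.ones_like(u), create_graph=True)[0]
    u_z, u_t = grads[:, 0:1], grads[:, 1:2]
    u_zz = torch.autograd.grad(u_z, zt, torch.ones_like(u_z), create_graph=True)[0][:, 0:1]
    u_tt = torch.autograd.grad(u_t, zt, torch.ones_like(u_t), create_graph=True)[0][:, 1:2]
    return u_tt - c**2 * u_zz                    # wave-equation residual

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
zt_data = torch.rand(256, 2); u_data = torch.zeros(256, 1)   # measured amplitudes (placeholder)
zt_coll = torch.rand(1024, 2)                                # collocation points

for step in range(2000):
    opt.zero_grad()
    # physics-informed loss = data misfit + mean squared PDE residual
    loss = nn.functional.mse_loss(net(zt_data), u_data) + pde_residual(zt_coll).pow(2).mean()
    loss.backward()
    opt.step()
```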

Keywords: physics-informed neural networks, deep learning, ground-penetrating radar (GPR), NDE, ConvLSTM, physics, data driven

Procedia PDF Downloads 8
10460 'How to Change Things When Change is Hard': Motivating Libyan College Students to Play an Active Role in Their Learning Process

Authors: Hameda Suwaed

Abstract:

Group work, time management and accepting others' opinions are practices rooted in the socio-political culture of democratic nations. In Libya, a country transitioning towards democracy, what is the impact of encouraging college students to use such practices in the English language classroom? How can teachers be encouraged to use such practices in an educational system characterized by traditional methods of teaching? Using data gathered through action research and classroom research, this study investigates how teachers can use education to change their students' understanding of their roles in their society by enhancing their sense of belonging to it. The study adapts a model of change that includes giving students clear directions, sufficient motivation and a supportive environment. These steps were applied by encouraging students to participate actively in the classroom through group work and a variety of activities. The findings of the study showed that following the suggested model can broaden students' perception of their belonging to their environment, starting with their classroom and ending with their country. In conclusion, although this was a small-scale study, the students' participation in the classroom shows that they gained self-confidence in using practices such as group work, presenting their ideas and accepting different opinions. What was remarkable is that most students were aware that this is what is needed in Libya nowadays.

Keywords: educational change, students' motivation, group work, foreign language teaching

Procedia PDF Downloads 404
10459 Optimization of Acid Treatments by Assessing Diversion Strategies in Carbonate and Sandstone Formations

Authors: Ragi Poyyara, Vijaya Patnana, Mohammed Alam

Abstract:

When acid is pumped into damaged reservoirs for damage removal/stimulation, distorted inflow of acid into the formation occurs caused by acid preferentially traveling into highly permeable regions over low permeable regions, or (in general) into the path of least resistance. This can lead to poor zonal coverage and hence warrants diversion to carry out an effective placement of acid. Diversion is desirably a reversible technique of temporarily reducing the permeability of high perm zones, thereby forcing the acid into lower perm zones. The uniqueness of each reservoir can pose several challenges to engineers attempting to devise optimum and effective diversion strategies. Diversion techniques include mechanical placement and/or chemical diversion of treatment fluids, further sub-classified into ball sealers, bridge plugs, packers, particulate diverters, viscous gels, crosslinked gels, relative permeability modifiers (RPMs), foams, and/or the use of placement techniques, such as coiled tubing (CT) and the maximum pressure difference and injection rate (MAPDIR) methodology. It is not always realized that the effectiveness of diverters greatly depends on reservoir properties, such as formation type, temperature, reservoir permeability, heterogeneity, and physical well characteristics (e.g., completion type, well deviation, length of treatment interval, multiple intervals, etc.). This paper reviews the mechanisms by which each variety of diverter functions and discusses the effect of various reservoir properties on the efficiency of diversion techniques. Guidelines are recommended to help enhance productivity from zones of interest by choosing the best methods of diversion while pumping an optimized amount of treatment fluid. The success of an overall acid treatment often depends on the effectiveness of the diverting agents.

Keywords: diversion, reservoir, zonal coverage, carbonate, sandstone

Procedia PDF Downloads 411
10458 Hedonic Pricing Model of Parboiled Rice

Authors: Roengchai Tansuchat, Wassanai Wattanutchariya, Aree Wiboonpongse

Abstract:

Parboiled rice is one of the most important food grains and is classified among cereals and cereal products. In 2015, parboiled rice accounted for more than 14.34% of total rice trade. The major parboiled rice exporting countries are Thailand and India, while many countries in Africa and the Middle East, such as Nigeria, South Africa, the United Arab Emirates, and Saudi Arabia, are parboiled rice importing countries. In the global rice market, parboiled rice pricing differs from white rice pricing because parboiled rice is a semi-processed product (soaked, steamed and dried), which affects its color and texture. Therefore, parboiled rice export pricing does not depend only on the trade volume, length of grain, and percentage of broken rice or purity, but also on grain attributes such as color, whiteness, consistency of color and whiteness, and texture. In addition, the parboiled rice price may depend on the country of origin and other attributes, such as certification marks, labels, packaging, and sales locations. The objectives of this paper are to study the attributes of parboiled rice sold in different countries and to evaluate the relationship between parboiled rice prices in different countries and their attributes by using a hedonic pricing model. These results are useful for product development and the development of marketing strategies. A total of 141 samples of parboiled rice were collected from 5 major parboiled rice consuming countries, namely Nigeria, South Africa, Saudi Arabia, the United Arab Emirates and Spain. The physicochemical and optical properties, namely seed size and shape, colour (L*, a*, and b*), parboiled rice texture (hardness, adhesiveness, cohesiveness, springiness, gumminess, and chewiness), nutrition (moisture, protein, carbohydrate, fat, and ash), amylose, packaging, country of origin, and label, are considered as explanatory variables. The results of the parboiled rice analysis revealed that most samples are classified as long grain and slender. The highest average whiteness value was found for the parboiled rice sold in South Africa. The amylose analysis showed that most of the parboiled rice is non-glutinous rice, classified in the intermediate amylose content range, with the maximum value found in the United Arab Emirates. The hedonic pricing model showed that size and shape are statistically significant key factors in determining the parboiled rice price. Among the colour attributes, the brightness value (L*) and the red-green value (a*) are statistically significant, but the yellow-blue value (b*) is not. In addition, the texture attributes that significantly affect the parboiled rice price are hardness, adhesiveness, cohesiveness, and gumminess. The findings could help parboiled rice millers, exporters and retailers formulate better production and marketing strategies by focusing on these attributes.
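
A minimal sketch of a hedonic price regression of this kind with statsmodels OLS is shown below: the log of the price is regressed on grain, colour and texture attributes, so the significant coefficients can be read as implicit attribute prices. The column names are hypothetical stand-ins for the measured variables, not the paper's actual dataset.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("parboiled_rice_samples.csv")   # 141 samples, one row each (assumed layout)
model = smf.ols(
    "np.log(price) ~ grain_length + grain_width + L_value + a_value + b_value"
    " + hardness + adhesiveness + cohesiveness + gumminess + amylose"
    " + C(country_of_origin)",
    data=df,
).fit()
print(model.summary())        # significant coefficients = implicit attribute prices
```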

Keywords: hedonic pricing model, optical properties, parboiled rice, physicochemical properties

Procedia PDF Downloads 315
10457 Troubleshooting Petroleum Equipment Based on Wireless Sensors and a Bayesian Algorithm

Authors: Vahid Bayrami Rad

Abstract:

In this research, common methods and techniques have been investigated, with a focus on intelligent fault-finding and monitoring systems in the oil industry. Remote and intelligent control methods are considered a necessity for implementing various operations in the oil industry, and benefiting from the knowledge extracted from the vast amounts of data generated, with the help of data mining algorithms, is a practical way to speed up monitoring and troubleshooting operations in today's large oil companies. Therefore, by comparing data mining algorithms and examining their efficiency, structure and behaviour under different conditions, the proposed Bayesian algorithm, which uses data clustering together with data evaluation by means of a colored Petri net, provides an applicable and dynamic model in terms of reliability and response time. By using this method, it is possible to achieve a dynamic and consistent model of the remote control system, prevent the occurrence of leakage in oil pipelines and refineries, and reduce costs as well as human and financial errors. The statistical data obtained from the evaluation process show an increase in reliability, availability and speed compared to other, previous methods.
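
As a toy illustration of the Bayesian reasoning step only (not the paper's model), the probability of a pipeline leak can be updated after each wireless sensor alarm; the prior and the sensor hit/false-alarm rates below are assumed values.

```python
def posterior_leak(prior, p_alarm_given_leak, p_alarm_given_no_leak):
    """Bayes' rule: P(leak | alarm) from the prior and the sensor's alarm rates."""
    evidence = p_alarm_given_leak * prior + p_alarm_given_no_leak * (1 - prior)
    return p_alarm_given_leak * prior / evidence

p = 0.01                                  # assumed prior probability of a leak
for _ in range(3):                        # three consecutive independent alarms
    p = posterior_leak(p, p_alarm_given_leak=0.95, p_alarm_given_no_leak=0.05)
    print(round(p, 4))                    # ~0.161 -> ~0.785 -> ~0.986
```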

Keywords: wireless sensors, petroleum equipment troubleshooting, Bayesian algorithm, colored Petri net, rapid miner, data mining, reliability

Procedia PDF Downloads 49
10456 Longitudinal Vibration of a Micro-Beam in a Micro-Scale Fluid Media

Authors: M. Ghanbari, S. Hossainpour, G. Rezazadeh

Abstract:

In this paper, the longitudinal vibration of a micro-beam in a micro-scale fluid medium has been investigated. The proposed mathematical model for this study consists of a micro-beam with a micro-plate at its free end. An AC voltage is applied to the pair of piezoelectric layers on the upper and lower surfaces of the micro-beam in order to actuate it longitudinally. The whole structure is bounded between two fixed plates on its upper and lower surfaces. The micro-gap between the structure and the fixed plates is filled with fluid. Fluids behave differently at the micro-scale than at the macro-scale, so the fluid field in the gap has been modeled based on the micro-polar theory. The coupled governing equations of motion of the micro-beam and the micro-scale fluid field have been derived. Due to the non-homogeneous boundary conditions, the derived equations have been transformed into an enhanced form with homogeneous boundary conditions. Using a Galerkin-based reduced order model, the enhanced equations have been discretized over the beam and fluid domains and solved simultaneously in order to obtain the force response of the micro-beam. The effects of the micro-polar parameters of the fluid, namely the characteristic length scale, the coupling parameter and the surface parameter, on the response of the micro-beam have been studied.

Keywords: micro-polar theory, Galerkin method, MEMS, micro-fluid

Procedia PDF Downloads 167
10455 Simultaneous Targeting of MYD88 and Nur77 as an Effective Approach for the Treatment of Inflammatory Diseases

Authors: Uzma Saqib, Mirza S. Baig

Abstract:

Myeloid differentiation primary response protein 88 (MYD88) has long been considered a central player in the inflammatory pathway. Recent studies clearly suggest that it is an important therapeutic target in inflammation. On the other hand, a recent study on the interaction between the orphan nuclear receptor (Nur77) and p38α, leading to increased lipopolysaccharide-induced hyperinflammatory response, suggests this binary complex as a therapeutic target. In this study, we have designed inhibitors that can inhibit both MYD88 and Nur77 at the same time. Since both MYD88 and Nur77 are an integral part of the pathways involving lipopolysaccharide-induced activation of NF-κB-mediated inflammation, we tried to target both proteins with the same library in order to retrieve compounds having dual inhibitory properties. To perform this, we developed a homodimeric model of MYD88 and, along with the crystal structure of Nur77, screened a virtual library of compounds from the traditional Chinese medicine database containing ~61,000 compounds. We analyzed the resulting hits for their efficacy for dual binding and probed them for developing a common pharmacophore model that could be used as a prototype to screen compound libraries as well as to guide combinatorial library design to search for ideal dual-target inhibitors. Thus, our study explores the identification of novel leads having dual inhibiting effects due to binding to both MYD88 and Nur77 targets.

Keywords: drug design, Nur77, MYD88, inflammation

Procedia PDF Downloads 290
10454 Does Citizens’ Involvement Always Improve Outcomes: Procedures, Incentives and Comparative Advantages of Public and Private Law Enforcement

Authors: Avdasheva Svetlana, Kryuchkova Polina

Abstract:

The comparative social efficiency of private and public enforcement of law is debated. This question is not of academic interest only; it is also important for the development of the legal system and regulations. Generally, involvement of 'common citizens' in public law enforcement is considered to be beneficial, while involvement of interest group representatives is not. Institutional economics as well as law and economics consider the difference between public and private enforcement to be rather mechanical. Actions of bureaucrats in government agencies are assumed to be driven by incentives linked to social welfare (or another indicator of public interest) and their own benefits. In contrast, actions of participants in private enforcement are driven by their private benefits. However, administrative law enforcement may be designed in such a way that it becomes driven mainly by the individual incentives of alleged victims. We refer to this system as reactive public enforcement. Citizens may prefer using reactive public enforcement even if private enforcement is available. However, replacement of public enforcement by the reactive version of public enforcement negatively affects deterrence and reduces social welfare. We illustrate the problem of private vs. pure public and private vs. reactive public enforcement models with the examples of three legislative subsystems in Russia – labor law, consumer protection law and competition law. While development of private enforcement instead of public enforcement (especially instead of the reactive public model) is desirable, replacement of both public and private enforcement by the reactive model is definitely not.

Keywords: public enforcement, private complaints, legal errors, competition protection, labor law, competition law, Russia

Procedia PDF Downloads 475
10453 Flood Prevention Strategy for Reserving Quality Ground Water Considering Future Population Growth in Kabul

Authors: Said Moqeem Sadat, Saito Takahiro, Inuzuka Norikazu, Sugiyama Ikuo

Abstract:

Kabul city is the capital of Afghanistan, with a population of about 4.0 million in 2009 and 6.5 million in 2025. It is geographically located in a narrow plain valley along the Kabul River and is surrounded by high mountains. Due to its steep geological and topographical conditions, the city has been suffering from floods caused by storm water and snow-melt water in the rainy season. Meanwhile, potable water resources are becoming a critical issue, as the underground water table is falling rapidly due to domestic, industrial and agricultural usage, especially in the dry season. This paper focuses on flood water management in Kabul, including the suburban agricultural area, considering not only flood protection but also: 1. reserving the quality ground water for future population growth; 2. irrigating the farming area in the dry season using storm water collected in ponds during the rainy season; 3. discharging contaminated city flood water safely to the downstream using existing channels/new pipes. Cost and benefit are considered in this study to find a suitable flood protection method both in the rural area and in the city center from the viewpoint of items 1 to 3 above. In this analysis, the cost mainly consists of the lost opportunity to develop land occupied by flood ponds, in addition to construction and maintenance costs, including the channels for collecting/discharging water. The benefit mainly consists of the reduction in flood damage due to the countermeasures (corresponding to the cost above), in addition to the contribution to agricultural crops. As far as the reservation of ground water for future city growth is concerned, future demand and supply are compared for the case in which the pumping amount is limited by this irrigation system.

Keywords: cost-benefit, hydrological modeling, water management, water quality

Procedia PDF Downloads 256
10452 Aggregation Scheduling Algorithms in Wireless Sensor Networks

Authors: Min Kyung An

Abstract:

In Wireless Sensor Networks, which consist of tiny wireless sensor nodes with limited battery power, one of the most fundamental applications is data aggregation, which collects nearby environmental conditions and aggregates the data to a designated destination, called a sink node. Important issues concerning data aggregation are time efficiency and energy consumption, due to the nodes' limited energy, and therefore the related problem, named Minimum Latency Aggregation Scheduling (MLAS), has been the focus of many researchers. Its objective is to compute the minimum latency schedule, that is, a schedule with the minimum number of timeslots such that the sink node can receive the aggregated data from all the other nodes without any collision or interference. For this problem, two interference models, the graph model and the more realistic physical interference model known as Signal-to-Interference-plus-Noise-Ratio (SINR), have been adopted with different power models, uniform power and non-uniform power (with or without power control), and different antenna models, the omni-directional and directional antenna models. In this survey article, as the problem has been proven to be NP-hard, we present and compare several state-of-the-art approximation algorithms in the various models, on the basis of latency as the performance measure.
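
To make the scheduling task concrete, a hedged sketch of a simple greedy baseline under the graph interference model follows: build a BFS tree toward the sink, then assign timeslots bottom-up so that a node transmits after all its children and no two simultaneous transmissions share or neighbour a receiver. This is a plain illustrative baseline, not one of the surveyed approximation algorithms.

```python
from collections import deque

def aggregation_schedule(adj, sink):
    # BFS tree toward the sink: parent pointers and depths
    parent, depth = {sink: None}, {sink: 0}
    queue = deque([sink])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in parent:
                parent[v], depth[v] = u, depth[u] + 1
                queue.append(v)

    slot = {}                      # node -> timeslot of its single transmission
    busy = {}                      # timeslot -> list of (sender, receiver) pairs
    for u in sorted(parent, key=lambda n: -depth[n]):   # deepest nodes first
        if parent[u] is None:
            continue               # the sink never transmits
        # must transmit after all children have reported their aggregates
        t = max((slot[c] for c in adj[u] if parent.get(c) == u), default=0) + 1
        while any(r == parent[u] or s in adj[parent[u]] or u in adj[r]
                  for s, r in busy.get(t, [])):
            t += 1                 # collision or interference: try a later slot
        slot[u] = t
        busy.setdefault(t, []).append((u, parent[u]))
    return slot

adj = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 4], 3: [1], 4: [2]}
print(aggregation_schedule(adj, sink=0))   # e.g. {3: 1, 4: 1, 1: 2, 2: 3}
```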

Keywords: data aggregation, convergecast, gathering, approximation, interference, omni-directional, directional

Procedia PDF Downloads 213
10451 Case Study Analysis of 2017 European Railway Traffic Management Incident: The Application of System for Investigation of Railway Interfaces Methodology

Authors: Sanjeev Kumar Appicharla

Abstract:

This paper presents the results of the modelling and analysis of a European Rail Traffic Management System (ERTMS) safety-critical incident on the Cambrian Railway in the UK, using RAIB report 17/2019 as the primary input, to raise awareness of biases in the systems engineering process. The RAIB, the UK's independent accident investigator, published Report RAIB 17/2019 giving the details of its investigation of the focal event in the form of the immediate cause, causal factors and underlying factors, together with recommendations to prevent a repeat of the safety-critical incident on the Cambrian Line. The System for Investigation of Railway Interfaces (SIRI) is the methodology used to model and analyze the safety-critical incident. The SIRI methodology uses the Swiss Cheese Model to model the incident and identify latent failure conditions (potentially less than adequate conditions) by means of the management oversight and risk tree technique. The benefits of the SIRI methodology are threefold. First, it incorporates the 'heuristics and biases' approach, advanced by the 2002 Nobel laureate in Economic Sciences, Prof Daniel Kahneman, in the management oversight and risk tree technique to identify systematic errors. Civil engineering and programme management railway professionals are aware of the role 'optimism bias' plays in programme cost overruns and are aware of bow-tie (fault and event tree) model-based safety risk modelling techniques. However, the role of systematic errors due to heuristics and biases is not yet appreciated; the SIRI approach overcomes the problem of omitting human and organizational factors from accident analysis. Second, the scope of the investigation includes all levels of the socio-technical system, including government, regulatory and railway safety bodies, duty holders, signalling firms and transport planners, and front-line staff, such that lessons are learned at the decision-making and implementation levels as well. Third, the author's past accident case studies are supplemented with evidence drawn from practitioners' and academic researchers' publications. This serves to discuss the role of systems thinking in improving the decision-making and risk management processes and practices in the IEC 15288 systems engineering standard and in industrial contexts such as GB railways and artificial intelligence (AI).

Keywords: accident analysis, AI algorithm internal audit, bounded rationality, Byzantine failures, heuristics and biases approach

Procedia PDF Downloads 179
10450 Coping with the Stress and Negative Emotions of Care-Giving by Using Techniques from Seneca, Epictetus, and Marcus Aurelius

Authors: Arsalan Memon

Abstract:

There are many challenges that a caregiver faces in average everyday life. One such challenge is coping with the stress and negative emotions of caregiving. The Stoics (i.e., Lucius Annaeus Seneca [4 B.C.E. - 65 C.E.], Epictetus [50-135 C.E.], and Marcus Aurelius [121-180 C.E.]) have provided coping techniques that are useful for dealing with stress and negative emotions. This paper lists and explains some of the fundamental coping techniques provided by the Stoics. For instance, some Stoic coping techniques are as follows (the list is far from exhaustive): a) mindfulness: to the best of your ability, constantly being aware of your thoughts, habits, desires, norms, memories, likes/dislikes, beliefs, values, and of everything outside of you in the world; b) constantly adjusting one's expectations in accordance with reality; c) memento mori: constantly reminding oneself that death is inevitable and that death is not to be seen as evil; and d) praemeditatio malorum: constantly detaching oneself from everything that is dear to one, so that the least amount of suffering follows from the loss, damage, or ceasing to be of such entities. All coping techniques will be extracted from the following original texts by the Stoics: Seneca's Letters to Lucilius, Epictetus' Discourses and the Encheiridion, and Marcus Aurelius' Meditations. One major finding is that the usefulness of each Stoic coping technique can be empirically tested by anyone, in the sense of applying it in one's own life, especially when facing real-life challenges. Another major finding is that all of the Stoic coping techniques are predicated upon, and follow from, one fundamental principle: constantly differentiate between what is and what is not in one's control. After differentiating between them, one should constantly habituate oneself not to try to control things that are beyond one's control. For example, the following things are beyond one's control (all things being equal): death, certain illnesses, being born into a particular socio-economic family, etc. The conclusion is that if one habituates oneself, by practicing to the best of one's ability, in both the fundamental Stoic principle and the Stoic coping techniques, then such habitual practice can eventually decrease the stress and negative emotions that one experiences as a caregiver.

Keywords: care-giving, coping techniques, negative emotions, stoicism, stress

Procedia PDF Downloads 124
10449 Particle Filter Supported with the Neural Network for Aircraft Tracking Based on Kernel and Active Contour

Authors: Mohammad Izadkhah, Mojtaba Hoseini, Alireza Khalili Tehrani

Abstract:

In this paper, we present a new method for tracking flying targets in color video sequences based on contour and kernel information. The aim of this work is to overcome the problem of losing the target under changing light, large displacement, changing speed, and occlusion. The proposed method consists of three steps: estimating the target location with a particle filter, segmenting the target region using a neural network, and finding the exact contours with a greedy snake algorithm. In the proposed method, we have used both region and contour information to create the target candidate model, and this model is dynamically updated during tracking. To avoid the accumulation of errors when updating, the target region is given to a perceptron neural network to separate the target from the background. Its output is then used for an exact calculation of the size and center of the target. It is also used as the initial contour for the greedy snake algorithm to find the target's exact edge. The proposed algorithm has been tested on a database which contains many challenges such as the high speed and agility of aircraft, background clutter, occlusions, camera movement, and so on. The experimental results show that the use of the neural network increases the accuracy of tracking and segmentation.
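
A minimal numpy sketch of the particle-filter step used for coarse target localisation is given below (constant-velocity motion, likelihood from a histogram-style distance); the measurement model is a deliberately crude placeholder for the paper's kernel and neural-network stages, and all parameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 500
particles = rng.normal([320, 240, 0, 0], [20, 20, 5, 5], size=(N, 4))  # x, y, vx, vy
weights = np.full(N, 1.0 / N)

def histogram_distance(xy):
    """Placeholder likelihood term; in practice compare candidate and target colour histograms."""
    target_xy = np.array([330.0, 250.0])
    return np.linalg.norm(xy - target_xy, axis=1)

def step(particles, weights, dt=1.0):
    # 1. predict: constant-velocity motion plus process noise
    particles[:, :2] += particles[:, 2:] * dt + rng.normal(0, 3, size=(N, 2))
    particles[:, 2:] += rng.normal(0, 1, size=(N, 2))
    # 2. update: weight particles by measurement likelihood
    weights = np.exp(-0.5 * (histogram_distance(particles[:, :2]) / 10.0) ** 2)
    weights /= weights.sum()
    # 3. resample (systematic) to avoid weight degeneracy
    positions = (np.arange(N) + rng.random()) / N
    idx = np.minimum(np.searchsorted(np.cumsum(weights), positions), N - 1)
    return particles[idx], np.full(N, 1.0 / N)

for _ in range(10):
    particles, weights = step(particles, weights)
print(particles[:, :2].mean(axis=0))   # estimated target centre
```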

Keywords: video tracking, particle filter, greedy snake, neural network

Procedia PDF Downloads 327
10448 Faster, Lighter, More Accurate: A Deep Learning Ensemble for Content Moderation

Authors: Arian Hosseini, Mahmudul Hasan

Abstract:

To address the increasing need for efficient and accurate content moderation, we propose an efficient and lightweight deep classification ensemble structure. Our approach is based on a combination of simple visual features, designed for high-accuracy classification of violent content with low false positives. Our ensemble architecture utilizes a set of lightweight models with narrowed-down color features, and we apply it to both images and videos. We evaluated our approach using a large dataset of explosion and blast contents and compared its performance to popular deep learning models such as ResNet-50. Our evaluation results demonstrate significant improvements in prediction accuracy, while benefiting from 7.64x faster inference and lower computation cost. While our approach is tailored to explosion detection, it can be applied to other similar content moderation and violence detection use cases as well. Based on our experiments, we propose a "think small, think many" philosophy in classification scenarios. We argue that transforming a single, large, monolithic deep model into a verification-based step model ensemble of multiple small, simple, and lightweight models with narrowed-down visual features can possibly lead to predictions with higher accuracy.

Keywords: deep classification, content moderation, ensemble learning, explosion detection, video processing

Procedia PDF Downloads 30
10447 Autonomic Sonar Sensor Fault Manager for Mobile Robots

Authors: Martin Doran, Roy Sterritt, George Wilkie

Abstract:

NASA, ESA, and NSSC space agencies have plans to put planetary rovers on Mars in 2020. For these future planetary rovers to succeed, they will depend heavily on sensors to detect obstacles. This will become of even more vital importance in the future if rovers become less dependent on commands received from earth-based control and more dependent on self-configuration and self-decision making. These planetary rovers will face harsh environments, and the possibility of hardware failure is high, as seen in missions from the past. In this paper, we focus on using autonomic principles, where self-healing, self-optimization, and self-adaption are explored using the MAPE-K model, and we expand this model to encapsulate attributes such as Awareness, Analysis, and Adjustment (AAA-3). In the experimentation, a Pioneer P3-DX research robot is used to simulate a planetary rover. The sonar sensors on the P3-DX robot are used to simulate the sensors on a planetary rover (even though, in reality, sonar sensors cannot operate in a vacuum). Experiments using the P3-DX robot focus on how our software system can adapt to the loss of sonar sensor functionality. The autonomic manager system is responsible for deciding how to make use of the remaining 'enabled' sonar sensors to compensate for those sonar sensors that are 'disabled'. The key to this research is that the robot can still detect objects even with reduced sonar sensor capability.
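
A hedged sketch of a MAPE-K-style autonomic loop in this spirit is shown below: monitor the sonar array, analyse which sensors look faulty, plan a reconfiguration of the remaining 'enabled' sensors, and execute it. The stuck-reading heuristic and the prioritisation of neighbouring sonars are illustrative assumptions, not the authors' implementation.

```python
class SonarAutonomicManager:
    def __init__(self, n_sonars=8, stuck_threshold=5):
        self.n = n_sonars
        self.readings = {i: [] for i in range(n_sonars)}   # Knowledge
        self.enabled = set(range(n_sonars))
        self.stuck_threshold = stuck_threshold

    def monitor(self, scan):                               # Monitor
        for i, r in scan.items():
            self.readings[i].append(r)

    def analyse(self):                                     # Analyse / Awareness
        faulty = set()
        for i in self.enabled:
            recent = self.readings[i][-self.stuck_threshold:]
            if len(recent) == self.stuck_threshold and len(set(recent)) == 1:
                faulty.add(i)                              # constant output -> likely stuck
        return faulty

    def plan(self, faulty):                                # Plan / Analysis, Adjustment
        remaining = self.enabled - faulty
        neighbours = {(i - 1) % self.n for i in faulty} | {(i + 1) % self.n for i in faulty}
        return {"disable": faulty, "prioritise": neighbours & remaining}

    def execute(self, plan):                               # Execute
        self.enabled -= plan["disable"]
        return self.enabled

manager = SonarAutonomicManager()
for t in range(5):
    manager.monitor({i: (2.5 if i == 3 else 1.0 + 0.1 * t) for i in range(8)})
plan = manager.plan(manager.analyse())
print(plan, manager.execute(plan))     # sonar 3 disabled, sonars 2 and 4 prioritised
```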

Keywords: autonomic, self-adaption, self-healing, self-optimization

Procedia PDF Downloads 335
10446 Hygro-Thermal Modelling of Timber Decks

Authors: Stefania Fortino, Petr Hradil, Timo Avikainen

Abstract:

Timber bridges have an excellent environmental performance, are economical, are relatively easy to build and can have a long service life. However, the durability of these bridges is the main problem because of their exposure to outdoor climate conditions. The moisture content accumulated in wood over long periods, in combination with certain temperatures, may create conditions suitable for timber decay. In addition, moisture content variations affect the structural integrity, serviceability and loading capacity of timber bridges. Therefore, the monitoring of the moisture content in wood is important for the durability of the material but also for the whole superstructure. The measurements obtained by the usual sensor-based techniques provide hygro-thermal data only at specific locations of the wood components. In this context, the monitoring can be assisted by numerical modelling to obtain more information on the hygro-thermal response of the bridges. This work presents a hygro-thermal model based on a multi-phase moisture transport theory to predict the distribution of moisture content, relative humidity and temperature in wood. Below the fibre saturation point, the multi-phase theory simulates three phenomena in cellular wood during moisture transfer, i.e., the diffusion of water vapour in the pores, the sorption of bound water and the diffusion of bound water in the cell walls. In the multi-phase model, the two water phases are separated, and the coupling between them is defined through a sorption rate. Furthermore, an average between the temperature-dependent adsorption and desorption isotherms is used. In previous works by some of the authors, this approach was found to be very suitable for studying the moisture transport in uncoated and coated stress-laminated timber decks. Compared to previous works, the hygro-thermal fluxes on the external surfaces include the influence of the absorbed solar radiation over time, and consequently the temperatures on the surfaces exposed to the sun are higher. This affects the whole hygro-thermal response of the timber component. The multi-phase model, implemented in a user subroutine of the Abaqus FEM code, provides the distribution of the moisture content, the temperature and the relative humidity in a volume of the timber deck. As a case study, the hygro-thermal data in wood are collected from the ongoing monitoring of the stress-laminated timber deck of the Tapiola Bridge in Finland, based on integrated humidity-temperature sensors, and the numerical results are found to be in good agreement with the measurements. The proposed model, used to assist the monitoring, can contribute to reducing the maintenance costs of bridges as well as the cost of instrumentation, and to increasing safety.

Keywords: moisture content, multi-phase models, solar radiation, timber decks, FEM

Procedia PDF Downloads 155