Search results for: groundwater flow and contaminant transport modeling
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 10168

328 An Adaptive Decomposition for the Variability Analysis of Observation Time Series in Geophysics

Authors: Olivier Delage, Thierry Portafaix, Hassan Bencherif, Guillaume Guimbretiere

Abstract:

Most observation data sequences in geophysics can be interpreted as resulting from the interaction of several physical processes at several time and space scales. As a consequence, measurement time series in geophysics often exhibit non-linearity and non-stationarity, fluctuate strongly at all time scales, and require a time-frequency representation to analyze their variability. Empirical Mode Decomposition (EMD) is a relatively new technique, part of a more general signal processing method called the Hilbert-Huang transform. This analysis method turns out to be particularly suitable for non-linear and non-stationary signals: it decomposes a signal in an auto-adaptive way into a sum of oscillating components named IMFs (Intrinsic Mode Functions), and thereby acts as a bank of bandpass filters. The advantages of EMD are that it is entirely data-driven and that it provides the principal variability modes of the dynamics represented by the original time series. However, its main limiting factor is the frequency resolution, which may give rise to the mode-mixing phenomenon, where the spectral contents of some IMFs overlap each other. To overcome this problem, J. Gilles proposed an alternative entitled “Empirical Wavelet Transform” (EWT), which builds a bank of filters from a segmentation of the original signal’s Fourier spectrum. The method is based on the idea used in the construction of both Littlewood-Paley and Meyer’s wavelets: the heart of the method lies in segmenting the Fourier spectrum by detecting local maxima in order to obtain a set of non-overlapping segments. Because it is tied to the Fourier spectrum, the frequency resolution provided by EWT is higher than that provided by EMD and therefore makes it possible to overcome the mode-mixing problem. On the other hand, while EWT can detect the frequencies involved in the fluctuations of the original time series, it cannot associate the detected frequencies with a specific mode of variability as EMD does. Because EMD is closer to the observation of physical phenomena than EWT, we propose here a new technique called EAWD (Empirical Adaptive Wavelet Decomposition), based on coupling the EMD and EWT techniques: the spectral density content of the IMFs is used to optimize the segmentation of the Fourier spectrum required by EWT. In this study, the EMD and EWT techniques are described, then the EAWD technique is presented. A comparison of the results obtained respectively by EMD, EWT and EAWD on time series of ozone total columns recorded at Reunion Island over the 1978-2019 period is discussed. This study was carried out as part of the SOLSTYCE project, dedicated to the characterization and modeling of the underlying dynamics of time series issued from complex systems in atmospheric sciences.
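To make the EMD-EWT coupling concrete, the following minimal Python sketch illustrates the EAWD idea only, not the authors' implementation: EMD extracts the IMFs, the dominant frequency of each IMF's spectrum is used to place Fourier-spectrum segment boundaries, and a crude brick-wall filter bank stands in for the Littlewood-Paley/Meyer construction of EWT. It assumes the PyEMD package (pip install EMD-signal); the toy two-tone series at the end is invented for illustration.

```python
# EAWD sketch: IMF dominant frequencies drive the EWT-style segmentation.
import numpy as np
from PyEMD import EMD

def eawd_sketch(signal, fs):
    imfs = EMD()(signal)                      # adaptive decomposition into IMFs
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spectrum = np.fft.rfft(signal)

    # Dominant frequency of each IMF, from its power spectrum.
    peaks = sorted(
        freqs[np.argmax(np.abs(np.fft.rfft(imf)) ** 2)] for imf in imfs
    )
    # Segment boundaries halfway between consecutive IMF peaks.
    bounds = [0.0] + [(a + b) / 2 for a, b in zip(peaks, peaks[1:])] + [fs / 2]

    # Brick-wall band-pass filters over each segment (a crude stand-in
    # for the Meyer wavelet filters used by EWT).
    modes = []
    for lo, hi in zip(bounds, bounds[1:]):
        mask = (freqs >= lo) & (freqs < hi)
        modes.append(np.fft.irfft(spectrum * mask, n=len(signal)))
    return modes

# Toy example: a two-tone monthly series (fs = 12 samples per year).
t = np.arange(480) / 12.0
x = np.sin(2 * np.pi * 1.0 * t) + 0.5 * np.sin(2 * np.pi * 0.09 * t)
components = eawd_sketch(x, fs=12.0)
```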

Keywords: adaptive filtering, empirical mode decomposition, empirical wavelet transform, filter banks, mode-mixing, non-linear and non-stationary time series, wavelet

Procedia PDF Downloads 137
327 Development and Total Error Concept Validation of Common Analytical Method for Quantification of All Residual Solvents Present in Amino Acids by Gas Chromatography-Head Space

Authors: A. Ramachandra Reddy, V. Murugan, Prema Kumari

Abstract:

Residual solvents in pharmaceutical samples are monitored using gas chromatography with headspace (GC-HS). Based on current regulatory and compendial requirements, measuring residual solvents is mandatory for all release testing of active pharmaceutical ingredients (API). Generally, isopropyl alcohol is used as the residual solvent in proline and tryptophan; methanol in cysteine monohydrate hydrochloride, glycine, methionine and serine; ethanol in glycine and lysine monohydrate; and acetic acid in methionine. In order to have a single method for determining these residual solvents (isopropyl alcohol, ethanol, methanol and acetic acid) in all these seven amino acids, a sensitive and simple method was developed using gas chromatography headspace with flame ionization detection. During development, poor reproducibility, retention time variation and bad peak shape were observed for the acetic acid peaks, due to the reaction of acetic acid with the stationary phase of the column (cyanopropyl dimethyl polysiloxane) and the dissociation of acetic acid in water (when used as diluent) during the temperature gradient. Therefore, dimethyl sulfoxide was used as diluent to avoid these issues, whereas most published GC-HS methods for acetic acid quantification rely on a derivatization technique to protect acetic acid. As per the compendia, a risk-based approach was selected as appropriate to determine the degree and extent of the validation process and to assure the fitness of the procedure. Therefore, the total error concept was selected to validate the analytical procedure. An accuracy profile of ±40% was selected for the lower level (quantitation limit) and ±30% for the other levels, with a 95% confidence interval (5% risk profile). The method was developed using an Agilent DB-WAXetr column (length 30 m, internal diameter 530 µm, film thickness 2.0 µm). Helium was used as carrier gas at a constant flow of 6.0 mL/min in constant make-up mode. The present method is simple, rapid, and accurate, and is suitable for the rapid analysis of isopropyl alcohol, ethanol, methanol and acetic acid in amino acids. The range of the method is 50 ppm to 200 ppm for isopropyl alcohol, 50 ppm to 3000 ppm for ethanol, 50 ppm to 400 ppm for methanol, and 100 ppm to 400 ppm for acetic acid, which covers the specification limits provided in the European Pharmacopoeia. The accuracy profile and risk profile generated as part of the validation were found to be satisfactory. Therefore, this method can be used for testing residual solvents in amino acid drug substances.
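The accuracy-profile arithmetic behind the total error concept can be sketched as follows: at each spiking level, the relative bias and a 95% beta-expectation tolerance interval (approximated here by a prediction interval for one future result) are compared against the acceptance limits (±40% at the quantitation limit, ±30% elsewhere). This is an illustrative sketch only; the recovery values below are invented, not data from the study.

```python
# Accuracy profile per level: bias, tolerance interval, pass/fail.
import numpy as np
from scipy import stats

def accuracy_profile(found, nominal, limit_pct, beta=0.95):
    found = np.asarray(found, dtype=float)
    recovery = 100.0 * found / nominal
    bias = recovery.mean() - 100.0          # relative bias, %
    s = recovery.std(ddof=1)
    n = len(recovery)
    t = stats.t.ppf((1 + beta) / 2, df=n - 1)
    half = t * s * np.sqrt(1 + 1 / n)       # tolerance half-width
    lo, hi = bias - half, bias + half
    ok = (lo >= -limit_pct) and (hi <= limit_pct)
    return bias, (lo, hi), ok

# Hypothetical acetic acid recoveries at the 100 ppm quantitation limit:
bias, interval, accepted = accuracy_profile(
    found=[96.1, 103.4, 98.7, 101.9, 95.2, 104.0],
    nominal=100.0, limit_pct=40.0)
print(f"bias {bias:+.1f}%, tolerance interval {interval}, pass={accepted}")
```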

Keywords: amino acid, head space, gas chromatography, total error

Procedia PDF Downloads 148
326 Analysis of Interparticle Interactions in High Waxy-Heavy Clay Fine Sands for Sand Control Optimization

Authors: Gerald Gwamba

Abstract:

Formation and oil well sand production is one of the greatest and oldest concerns of the oil and gas industry. The production of sand particles may vary from very small, limited amounts to elevated levels with the potential to plug the pore spaces near the perforations or to block production through surface facilities. Therefore, timely and reliable investigation of the conditions leading to the onset of sanding, and quantification of sanding during production, is imperative. The challenges of sand production are even greater when producing from waxy and heavy wells with clay fine sands (WHFC). Existing research argues that waxy and heavy hydrocarbons exhibit markedly different characteristics, waxy crudes being more paraffinic while heavy crude oils exhibit more asphaltenic properties. Moreover, the combined effect of WHFC conditions introduces more complexity in production than the individual effects considered separately. Research on a combined high WHFC system could therefore better represent this cumulative effect, which is more comparable to field conditions; a one-sided view of individual effects on sanding has been argued to be, to some extent, misrepresentative of actual field conditions, since all factors act in combination. In recognition of the limited dedicated research on sand production with the combined WHFC effect, our research applies the Design of Experiments (DOE) methodology, based on the latest literature, to analyze the relationship between various interparticle factors and selected sand control methods. Our research aims to develop a better understanding of how the combined effect of interparticle factors, including strength, cementation, particle size and production rate, among others, could assist in the design of an optimal sand control system for WHFC well conditions. In this regard, we seek to answer the following research question: how does the combined effect of interparticle factors affect the optimization of sand control systems for WHFC wells? Results from experimental data collection will inform a better-justified sand control design for WHFC. In doing so, we hope to contribute to earlier contrasting findings arguing that sand production could potentially enable well self-permeability enhancement through the establishment of new flow channels created by the loosening and detachment of sand grains. We hope that our research will contribute to future sand control designs capable of adapting to flexible production adjustments in controlled sand management. This paper presents results that are part of ongoing research towards the authors' PhD project on the optimization of sand control systems for WHFC wells.
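As an illustration of the DOE setup described above, the following minimal sketch enumerates a two-level full factorial design over four interparticle factors. The factor names and levels are hypothetical placeholders for illustration only, not values from the study.

```python
# Two-level full factorial design: 2^4 = 16 experimental runs.
from itertools import product

factors = {
    "strength_kPa":     (50, 500),      # weakly vs. well-cemented sand
    "cementation":      ("low", "high"),
    "particle_size_um": (75, 250),      # fine vs. coarse
    "production_rate":  ("low", "high"),
}

# Each run is one combination of factor levels.
runs = [dict(zip(factors, levels)) for levels in product(*factors.values())]
for i, run in enumerate(runs, start=1):
    print(f"run {i:2d}: {run}")
```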

Keywords: waxy-heavy oils, clay-fine sands, sand control optimization, interparticle factors, design of experiments

Procedia PDF Downloads 131
325 Direct Current Grids in Urban Planning for More Sustainable Urban Energy and Mobility

Authors: B. Casper

Abstract:

The energy transition towards renewable energies and drastically reduced carbon dioxide emissions in Germany is driving multiple sectors into a transformation process. Photovoltaic and onshore wind power feed predominantly into the low- and medium-voltage grids, yet the electricity grid is not laid out to accommodate an increasing feed-in at these voltage levels. Electric mobility is currently in its run-up phase in Germany and still lacks a significant number of charging stations; the additional power demand from e-mobility cannot be supplied by the existing electric grids in most cases. The future heating and cooling demands of commercial and residential buildings will increasingly be met by heat pumps. Yet the most important part of the energy transition is the storage of surplus energy generated by photovoltaic and wind power sources. Water electrolysis is one way to store surplus energy, known as power-to-gas. With vehicle-to-grid technology, the upcoming fleet of electric cars could be used as energy storage to stabilize the grid. All these processes use direct current (DC), and the demand for bi-directional flow and higher efficiency in future grids can be met by using DC. The Flexible Electrical Networks (FEN) research campus at RWTH Aachen investigates, in an interdisciplinary manner, the advantages, opportunities, and limitations of DC grids. This paper investigates the impact of DC grids as a technological innovation on urban form and urban life. By applying explorative scenario development, analysis of mapped open data sources on grid networks, and research-by-design as a conceptual design method, possible starting points for a transformation to DC medium-voltage grids could be identified. Several fields of action have emerged in which DC technology could become a catalyst for future urban development: the energy transition in urban areas, e-mobility, and the transformation of network infrastructure. The investigation shows a significant potential to increase renewable energy production within cities with DC grids. The charging infrastructure for electric vehicles will predominantly use DC in the future, because fast and ultra-fast charging can only be achieved with DC. Our research shows that e-mobility, combined with autonomous driving, has the potential to change urban space and urban logistics fundamentally. Furthermore, there are possible win-win-win solutions for the municipality, the grid operator and the inhabitants: replacing overhead transmission lines with underground DC cables to open up spaces in contested urban areas can become a positive example of how the energy transition can contribute to a more sustainable urban structure. The outlook makes clear that target grid planning and urban planning will increasingly need to be synchronized.

Keywords: direct current, e-mobility, energy transition, grid planning, renewable energy, urban planning

Procedia PDF Downloads 128
324 Developing a Leukemia Diagnostic System Based on Hybrid Deep Learning Architectures in Actual Clinical Environments

Authors: Skyler Kim

Abstract:

An early diagnosis of leukemia has always been a challenge for doctors and hematologists. Worldwide, approximately 350,000 new cases were reported in 2012, and diagnosing leukemia has been time-consuming and inefficient because of an endemic shortage of flow cytometry equipment in current clinical practice. As the number of medical diagnostic tools increased and a large volume of high-quality data was produced, there was an urgent need for more advanced data analysis methods. One of these methods is the AI approach, which has become a major trend in recent years, and several research groups have been working on developing such diagnostic models. However, designing and implementing a leukemia diagnostic system in real clinical environments based on a deep learning approach with larger datasets remains complex. Leukemia is a major hematological malignancy that results in mortality and morbidity across different ages. We selected acute lymphocytic leukemia to develop our diagnostic system since it is the most common type of leukemia, accounting for 74% of all children diagnosed with leukemia; the results of this development work can be applied to all other types of leukemia. To develop our model, the Kaggle dataset was used, which consists of 15,135 total images, of which 8,491 are images of abnormal cells and 5,398 are normal. In this paper, we design and implement a leukemia diagnostic system for a real clinical environment with the functions of detecting and classifying leukemia. Differently from other AI approaches, we explore hybrid architectures to improve on current performance. First, we developed two independent convolutional neural network models: VGG19 and ResNet50. Then, using both VGG19 and ResNet50, we developed a hybrid deep learning architecture employing transfer learning techniques to extract features from each input image. In our approach, features fused from specific abstraction layers can be treated as auxiliary features and lead to a further improvement in classification accuracy. Features extracted from the lower levels are combined into higher-dimension feature maps to improve the discriminative capability of intermediate features and to mitigate vanishing or exploding gradients. By comparing VGG19, ResNet50 and the proposed hybrid model, we concluded that the hybrid model has a significant advantage in accuracy. The detailed results of each model’s performance, and their pros and cons, will be presented at the conference.
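A minimal Keras sketch of the hybrid architecture described above follows: frozen VGG19 and ResNet50 backbones (ImageNet transfer learning) whose pooled features are fused by concatenation before a binary classifier head. The input size and head dimensions are illustrative choices, not the authors' exact configuration.

```python
# Hybrid VGG19 + ResNet50 feature-fusion classifier (normal vs. abnormal).
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import VGG19, ResNet50

inputs = layers.Input(shape=(224, 224, 3))

vgg = VGG19(weights="imagenet", include_top=False, pooling="avg")
res = ResNet50(weights="imagenet", include_top=False, pooling="avg")
vgg.trainable = False   # transfer learning: reuse ImageNet features
res.trainable = False

# Each backbone expects its own preprocessing of the shared input.
f_vgg = vgg(tf.keras.applications.vgg19.preprocess_input(inputs))
f_res = res(tf.keras.applications.resnet50.preprocess_input(inputs))

# Feature fusion: concatenate the two embeddings as auxiliary features.
fused = layers.Concatenate()([f_vgg, f_res])
x = layers.Dense(256, activation="relu")(fused)
x = layers.Dropout(0.5)(x)
outputs = layers.Dense(1, activation="sigmoid")(x)

model = Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.summary()
```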

Keywords: acute lymphoblastic leukemia, hybrid model, leukemia diagnostic system, machine learning

Procedia PDF Downloads 187
323 Tip60 Histone Acetyltransferase Activators as Neuroepigenetic Therapeutic Modulators for Alzheimer’s Disease

Authors: Akanksha Bhatnagar, Sandhya Kortegare, Felice Elefant

Abstract:

Context: Alzheimer's disease (AD) is a neurodegenerative disorder characterized by progressive cognitive decline and memory loss. The cause of AD is not fully understood, but it is thought to arise from a combination of genetic, environmental, and lifestyle factors. One of the hallmarks of AD is the loss of neurons in the hippocampus, a brain region that is important for memory and learning. This loss of neurons is thought to be driven by a decrease in histone acetylation, a process that regulates gene expression. Research Aim: The aim of the study was to develop small molecule compounds that enhance the activity of Tip60, a histone acetyltransferase (HAT) that is important for memory and learning. Methodology/Analysis: The researchers used in silico structural modeling and a pharmacophore-based virtual screening approach to design and synthesize small molecule compounds strongly predicted to target and enhance Tip60’s HAT activity. The compounds were then tested in vitro and in vivo to assess their ability to enhance Tip60 activity and rescue cognitive deficits in AD models. Findings: Several of the compounds were able to enhance Tip60 activity and rescue cognitive deficits in AD models. The compounds were also designed to cross the blood-brain barrier, an important factor for the development of potential AD therapeutics. Theoretical Importance: The findings of this study suggest that Tip60 HAT activators have the potential to be developed as therapeutic agents for AD. The compounds are specific to Tip60, which suggests that they may have fewer side effects than less selective epigenetic drugs such as HDAC inhibitors. Additionally, the compounds are able to cross the blood-brain barrier, a major hurdle for the development of AD therapeutics. Data Collection: The study collected data from a variety of sources, including in vitro assays and animal models. The in vitro assays assessed the ability of the compounds to enhance Tip60 activity using histone acetyltransferase (HAT) enzyme assays and chromatin immunoprecipitation assays. Animal models were used to assess the ability of the compounds to rescue cognitive deficits in AD models using a variety of behavioral tests, including locomotor ability, sensory learning, and recognition tasks. Future human clinical trials will be used to assess the safety and efficacy of the compounds in humans. Questions: The question addressed by this study was whether Tip60 HAT activators could be developed as therapeutic agents for AD. Conclusions: Tip60 HAT activators show potential as therapeutic agents for AD: they are specific to Tip60, suggesting fewer side effects than less selective epigenetic drugs, and they cross the blood-brain barrier. Further research is needed to confirm the safety and efficacy of these compounds in humans.

Keywords: Alzheimer's disease, cognition, neuroepigenetics, drug discovery

Procedia PDF Downloads 75
322 Influence of Laser Treatment on the Growth of Sprouts of Different Wheat Varieties

Authors: N. Bakradze, T. Dumbadze, N. Gagelidze, L. Amiranashvili, A. D. L. Batako

Abstract:

Cereals are considered a strategic product in human life, and their demand is increasing with the growth of the world population. There are recurrent shortages of cereals in various areas of the globe. For example, Georgia’s own production meets only 15-20% of its demand for grain, despite the fact that the country is considered one of the main centers of wheat origin. In Georgia, there are 14 species of wheat with more than 150 subspecies, including 40 subspecies of common wheat. Increasing wheat production is important for the country. One way to address the problem is to develop and implement new, environmentally and economically acceptable technologies. Such technologies include pre-sowing treatment of seed with a laser and with the associative nitrogen-fixing bacterium Azospirillum brasilense. Dika and Lomtagora are among the most common wheat varieties in the region. Dika is a frost-resistant wheat with a high ability to adapt to the environment; it is resistant to lodging and is sown in highlands. Dika’s excellent properties are due to its strong immunity to fungal diseases, and its grains are rich in protein and lysine. Lomtagora 126 is distinguished by its winter hardiness and drought resistance, and it has a great ability to germinate. Lomtagora is characterized by a strong root system and a high budding capacity. It is an early, lodging-resistant variety, easy to thresh and suitable for mechanized harvesting, with large red grains. The plant is moderately resistant to fungal diseases. This paper presents some preliminary experimental results in which a continuous CO2 laser at a power density of 25-40 W/cm² was used to irradiate grains moving at a rate of 10-15 cm/sec. The treatment was carried out on grains of Triticum aestivum L. var. lutescens (local variety name Lomtagora 126) and Triticum carthlicum Nevski (local variety name Dika). The grains were also treated with an Azospirillum brasilense isolate (10⁸-10⁹ CFU/ml) obtained from the rhizosphere of wheat. It was observed that the germination of the wheat was not significantly influenced by either the laser or the bacterial treatment. In the case of the variety Lomtagora 126, irradiation at an angle of 90° slightly improved growth within 38 days of sowing, and irradiation at an angle of 90°+1 improved it by 23%. Treatment of seeds with Azospirillum brasilense, in both irradiated and non-irradiated variants, improved the growth of sprouts: by 22% with Azospirillum treatment alone, and by 29% with the joint treatment of seeds with Azospirillum and irradiation. In the case of the Dika wheat, irradiation alone increased growth by 8-9%, and the combined treatment with Azospirillum and irradiation by 10-15%, in comparison with the control. Thus, the combined treatment of wheat of different varieties provided the best effect on growth. Acknowledgment: This work was supported by the Shota Rustaveli National Science Foundation of Georgia (SRNSFG) (Grant number CARYS 19-573).

Keywords: laser treatment, Azospirillum brasilense, seeds, wheat varieties, Lomtagora, Dika

Procedia PDF Downloads 144
321 Perception of Corporate Social Responsibility and Enhancing Compassion at Work through Sense of Meaningfulness

Authors: Nikeshala Weerasekara, Roshan Ajward

Abstract:

In the contemporary business environment, given the stringent scrutiny of corporate behavior, organizations are under pressure to develop and implement solid overarching Corporate Social Responsibility (CSR) strategies. In that milieu, in order to differentiate themselves from competitors and maintain stakeholder confidence, banks spend millions of dollars on CSR programmes. However, knowledge of how non-western bank employees perceive such activities is inconclusive. At the same time, only recently have researchers shifted their focus to the positive effects of compassion at work and the organizational conditions under which it arises. Nevertheless, the mediation mechanisms between CSR and compassion at work have not been adequately examined, leaving a vacuum to be explored. Although finding a purpose in work greater than the extrinsic outcomes of the work is important to employees, meaningful work has not been examined adequately. Thus, in addition to examining the direct relationship between CSR and compassion at work, this study examined the mediating capability of meaningful work between these variables. Specifically, the researcher explored how CSR enables employees to sense work as meaningful, which in turn would enhance their level of compassion at work. Hypotheses were developed to examine the direct relationship between CSR and compassion at work and the mediating effect of meaningful work on this relationship. Both Social Identity Theory (SIT) and Social Exchange Theory (SET) were used to theoretically support the relationships. The sample comprised 450 respondents covering different levels of the bank. A convenience sampling strategy was used to secure responses from 13 local licensed commercial banks in Sri Lanka. Data were collected using a structured questionnaire that was developed based on a comprehensive review of the literature and refined using both expert opinions and a pilot survey. Structural equation modeling using SmartPLS (partial least squares) was utilized for data analysis. Findings indicate a positive and significant (p < .05) relationship between CSR and compassion at work, and meaningful work was found to partially mediate this relationship. It is therefore concluded that bank employees’ perception of CSR engagement not only directly influences compassion at work but also does so through meaningful work. This implies that employees value working for a socially responsible bank because it creates greater meaningfulness of work, which retains them in the organization and in turn triggers a higher level of compassion at work. Utilizing both SIT and SET to explain the relationship between CSR and compassion at work constitutes the theoretical significance of the study: it enhances the existing literature on CSR and compassion at work and adds insights into the mediating capability of psychologically related variables such as meaningful work. The study is also expected to have significant policy implications: to increase compassion at work, managers must understand the importance of including CSR activities in their strategy in order to thrive. Finally, it provides evidence of the suitability of SmartPLS for testing models with mediating relationships involving non-normal data.
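The mediation logic tested here (CSR → meaningful work → compassion at work) can be sketched with a bootstrapped indirect effect. This is a conceptual illustration only: the study itself used PLS-SEM in SmartPLS, whereas the sketch below uses plain OLS path regressions on synthetic data.

```python
# Bootstrap of the indirect effect a*b in a simple mediation model.
import numpy as np

rng = np.random.default_rng(0)
n = 450
csr = rng.normal(size=n)
meaning = 0.5 * csr + rng.normal(size=n)                     # path a
compassion = 0.3 * meaning + 0.2 * csr + rng.normal(size=n)  # paths b, c'

def ols(y, X):
    """Return slope coefficients (intercept dropped) of y ~ X."""
    X = np.column_stack([np.ones(len(y))] + list(X))
    return np.linalg.lstsq(X, y, rcond=None)[0][1:]

boot = []
for _ in range(2000):
    i = rng.integers(0, n, n)                    # resample cases
    a = ols(meaning[i], [csr[i]])[0]
    b = ols(compassion[i], [meaning[i], csr[i]])[0]
    boot.append(a * b)

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect a*b, 95% CI: [{lo:.3f}, {hi:.3f}]")
# Partial mediation is supported when this CI excludes zero
# while the direct path c' remains significant.
```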

Keywords: compassion at work, corporate social responsibility, employee commitment, meaningful work, positive affect

Procedia PDF Downloads 126
320 Choking among Babies, Toddlers and Children with Special Needs: A Review of Mechanisms, Implications, Incidence, and Recommendations of Professional Prevention Guidelines

Authors: Ella Abaev, Shany Segal, Miri Gabay

Abstract:

Background: Choking is a blockage of the airways that prevents efficient breathing and air flow to the lungs. Choking may be partial or complete and is an emergency situation; complete or prolonged choking leads to apnea and a lack of oxygen in the tissues of the body and brain, and can cause death. There are three mechanisms of choking: obstruction of the internal airways by aspirated food or objects; blockage or covering of the external air passages by any material; and external pressure on the neck or entrapment between objects. Children's airways are narrower than those of adults, so their risk of choking from the aspiration of food and other foreign bodies into the lungs is greater. The Child Development Center at Safra Children’s Hospital, Tel Hashomer, Israel, treats infants, toddlers, and children aged 0-18 years with various developmental disabilities. Due to the increase in reports of ‘almost an event’ of choking in the past year, and the serious consequences of a choking event, it was decided to give particular emphasis to the issue. Incidence and methods: The number of reports of an ‘almost an event’ or an actual choking event at the center during the years 2013-2018 was examined, and thorough research was conducted on the subject in order to build a prevention program. Findings: Between 2013 and 2018, the center recorded about ten cases of ‘almost choking events’; in the middle of 2018 alone, three such cases were reported. Objective: Providing knowledge raises awareness and leads to changes in perception and behavior, and thus to prevention. The center employs more than 130 staff members from various sectors, so promoting the quality and safety of treatment is the work of multi-professional teams. The familiarity of the staff with risk factors, prevention guidelines, identification of choking signs, and treatment is most important and significant in determining the outcome of a choking event. Conclusions and recommendations: In-depth research was carried out in cooperation with the Risk Management Unit on the subject of choking, covering definitions, mechanisms, risk factors, treatment methods, and extensive recommendations for prevention (e.g., using treatment and stimulation accessories bearing standards association stamps, and matching the type of food and the way it is served to the child's age and swallowing ability). The expected stages of development and an emphasis on the population of children with special needs were taken into account. The research findings will be disseminated to staff and to the parents of patients through professional publications and lectures, with the expectation of decreasing the number of choking events in the coming years.

Keywords: children with special needs, choking, educational system, prevention guidelines

Procedia PDF Downloads 179
319 The Practical Application of Sensory Awareness in Developing Healthy Communication, Emotional Regulation, and Emotional Introspection

Authors: Node Smith

Abstract:

Developmental psychology has long focused on modeling consciousness, often neglecting practical application and clinical utility. This paper aims to bridge this gap by exploring the practical application of physical and sensory tracking and awareness in fostering essential skills for conscious development. Higher conscious development requires practical skills such as self-agency, the ability to hold multiple perspectives, and genuine altruism. These are not personality characteristics but areas of skillfulness that address many cultural deficiencies impacting our world, and they are intertwined with both individual and collective conscious development. Physical and sensory tracking and awareness are crucial for developing these skills and offer the added benefit of cultivating healthy communication, emotional regulation, and introspection. Unlike skills such as throwing a baseball, which can be developed through practice or innate ability, the abilities to introspect, track physical sensations, and observe oneself objectively are essential for advancing consciousness. Lacking these skills leads to cultural and individual anxiety, helplessness, and a lack of agency, manifesting as blame-shifting and irresponsibility. The inability to hold multiple perspectives stifles altruism, as genuine consideration for a global community requires accepting other perspectives without conditions. Physical and sensory tracking enhances self-awareness by grounding individuals in their bodily experiences. This grounding is critical for emotional regulation, allowing individuals to identify and process emotions in real time, preventing overwhelm and fostering balance. Techniques like mindfulness meditation and body-scan exercises attune individuals to their physical sensations, providing insights into their emotional states. Sensory awareness also facilitates healthy communication by fostering empathy and active listening. When individuals are in tune with their physical sensations, they become more present in interactions, picking up on subtle cues and responding thoughtfully. This presence reduces misunderstandings and conflicts, promoting more effective communication. The ability to introspect and observe oneself objectively is key to emotional introspection. This skill allows individuals to reflect on their thoughts, feelings, and behaviors, identify patterns, recognize areas for growth, and make conscious choices aligned with their values and goals. In conclusion, physical and sensory tracking and awareness are vital for developing the skills necessary for higher consciousness development. By fostering self-agency, emotional regulation, and the ability to hold multiple perspectives, these practices contribute to healthier communication, deeper emotional introspection, and a more altruistic and connected global community. Integrating these practices into developmental psychology and therapeutic interventions holds significant promise for both individual and societal transformation.

Keywords: conscious development, emotional introspection, emotional regulation, self-agency, stages of development

Procedia PDF Downloads 45
318 Seafloor and Sea Surface Modelling in the East Coast Region of North America

Authors: Magdalena Idzikowska, Katarzyna Pająk, Kamil Kowalczyk

Abstract:

Seafloor topography is a fundamental issue in geological, geophysical, and oceanographic studies. Single-beam or multibeam sonars attached to the hulls of ships emit a hydroacoustic signal from transducers and reproduce the topography of the seabed with good accuracy and spatial resolution. Bathymetric data from ship surveys are provided by the National Centers for Environmental Information of the National Oceanic and Atmospheric Administration. Unfortunately, most of the seabed remains unmapped, as there are still many gaps to be explored between ship survey tracks; moreover, such measurements are very expensive and time-consuming. One solution is the raster bathymetric models shared by the General Bathymetric Chart of the Oceans (GEBCO), whose products are compilations of different data sets, raw or processed. Measurements of gravity anomalies also serve as indirect data for the development of bathymetric models: some forms of seafloor relief (e.g., seamounts) increase the force of the Earth's pull, leading to changes in the sea surface. From satellite altimetry data, sea surface height and marine gravity anomalies can be estimated, and from the anomalies it is possible to infer the structure of the seabed. The main goal of this work is to create regional bathymetric models and models of the sea surface in the area of the east coast of North America, a region of seamounts and undulating seafloor. The research includes an analysis of the methods and techniques used, an evaluation of the interpolation algorithms applied, densification of the models, and the creation of grid models. The data used are raster bathymetric models in NetCDF format, survey data from multibeam soundings in MB-System format, and satellite altimetry data from the Copernicus Marine Environment Monitoring Service. The methodology includes data extraction, processing, mapping, and spatial analysis, with visualization of the results carried out using Geographic Information System tools. The result is an extension of the state of knowledge on the quality and usefulness of the data used for seabed and sea surface modeling, and on the accuracy of the generated models. Sea level is averaged over time and space (excluding waves, tides, etc.); its changes, together with knowledge of the topography of the ocean floor, inform us indirectly about the volume of the entire ocean. The true shape of the ocean surface is further varied by phenomena such as tides, differences in atmospheric pressure, wind systems, thermal expansion of water, and phases of ocean circulation. At a given location, the greater the depth, the weaker the trend of sea level change. The study shows that combining data sets from different sources, with different accuracies, can affect the quality of sea surface and seafloor topography models.
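A minimal sketch of the gridding workflow described above follows: scattered multibeam soundings are interpolated onto a regular grid with two of the interpolation algorithms under evaluation, and the result is written as a NetCDF grid model. The file names, variable names, and grid spacing are hypothetical placeholders.

```python
# Grid scattered soundings and write NetCDF grid models for comparison.
import numpy as np
import xarray as xr
from scipy.interpolate import griddata

# Raster bathymetric model (e.g., a GEBCO subset) in NetCDF format.
gebco = xr.open_dataset("gebco_subset.nc")      # hypothetical file
model_depth = gebco["elevation"]                # hypothetical variable name

# Scattered ship soundings: lon, lat, depth columns (hypothetical file).
pts = np.loadtxt("multibeam_soundings.xyz")
lon, lat, depth = pts[:, 0], pts[:, 1], pts[:, 2]

# Target grid at 15 arc-second spacing over the survey area.
glon = np.arange(lon.min(), lon.max(), 15 / 3600)
glat = np.arange(lat.min(), lat.max(), 15 / 3600)
mlon, mlat = np.meshgrid(glon, glat)

# Compare interpolation algorithms: linear vs. cubic.
for method in ("linear", "cubic"):
    grid = griddata((lon, lat), depth, (mlon, mlat), method=method)
    da = xr.DataArray(grid, coords={"lat": glat, "lon": glon},
                      dims=("lat", "lon"), name=f"depth_{method}")
    da.to_netcdf(f"survey_grid_{method}.nc")    # grid model output
```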

Keywords: seafloor, sea surface height, bathymetry, satellite altimetry

Procedia PDF Downloads 80
317 Traditional Rainwater Harvesting Systems: A Sustainable Solution for Non-Urban Populations in the Mediterranean

Authors: S. Fares, K. Mellakh, A. Hmouri

Abstract:

The StorMer project aims to set up a network of researchers to study traditional hydraulic rainwater harvesting systems in the Mediterranean basin, a region suffering major impacts of climate change and possessing limited natural water resources. The arid and semi-arid Mediterranean basin has a long history of pioneering water management practices; the region developed various ancient traditional systems, such as cisterns and qanats, to manage water resources sustainably under historical conditions of scarcity. The StorMer project therefore brings together Spain, France, Italy, Greece, Jordan and Morocco to explore traditional rainwater harvesting practices and systems in the Mediterranean region and to develop accurate modeling to simulate the performance and sustainability of these technologies under present-day climatic conditions. The ultimate goal is to revive and valorize these practices in the context of contemporary challenges. The project was intended to establish a Mediterranean network to serve as a basis for a more ambitious undertaking: to analyze traditional hydraulic systems and create a prototype hydraulic ecosystem using a coupled environmental approach and traditional and ancient know-how, with the aim of reinterpreting them in the light of current techniques. The combination of traditional and modern knowledge and techniques is expected to lead to proposals for innovative hydraulic systems. The pandemic initially slowed our progress, but we were eventually able to carry out the fieldwork in Morocco and Saudi Arabia and so restart the project. With the participation of colleagues from distant fields (archaeology, sociology), we are now prepared to share our observations and propose the next steps. This interdisciplinary approach gives us a global vision of the project's objectives and challenges, and a diachronic approach is needed to tackle the question of the long-term adaptation of societies in a Mediterranean context that has experienced several periods of water stress. The next stage of the StorMer project is the implementation of pilots in non-urbanized regions. These pilots will test the implementation of traditional systems and will be maintained and evaluated in terms of effectiveness, cost and acceptance. Based on these experiences, larger projects will be proposed, which could inform regional water management policies. One of the most important lessons learned from this project is the highly social nature of managing traditional rainwater harvesting systems. Unlike modern, centralized water infrastructures, these systems often require the involvement of communities, which assume ownership of and responsibility for them. This kind of community engagement leads to better maintenance and, therefore, greater sustainability of the systems. Knowledge of the socio-cultural characteristics of these communities means that the systems can be adapted to the needs of each location, ensuring greater acceptance and efficiency.

Keywords: oasis, rainfall harvesting, arid regions, Mediterranean

Procedia PDF Downloads 40
316 Assessing Moisture Adequacy over Semi-arid and Arid Indian Agricultural Farms using High-Resolution Thermography

Authors: Devansh Desai, Rahul Nigam

Abstract:

Crop water stress (W) at a given growth stage starts to set in as moisture availability (M) to roots falls below 75% of maximum. It has been found that the ratio of crop evapotranspiration (ET) to reference evapotranspiration (ET0) is an indicator of moisture adequacy and is strongly correlated with M and W. The spatial variability of ET0 over an agricultural farm of 1-5 ha is generally less than that of ET, because ET depends on both surface and atmospheric conditions while ET0 depends only on atmospheric conditions. Solutions from surface energy balance (SEB) modeling and thermal infrared (TIR) remote sensing are now known to estimate the latent heat flux of ET. In the present study, ET and the moisture adequacy index (MAI = ET/ET0) have been estimated over two contrasting western Indian agricultural farms: a rice-wheat system in a semi-arid climate, and an arid grassland system limited by moisture availability. High-resolution multi-band TIR observations at 65 m from the ECOSTRESS (ECOsystem Spaceborne Thermal Radiometer Experiment on Space Station) instrument on board the International Space Station (ISS) were used in an analytical SEB model, STIC (Surface Temperature Initiated Closure), to estimate ET and MAI. The ancillary variables used in the ET modeling and MAI estimation were land surface albedo and NDVI from near-coincident LANDSAT data at 30 m spatial resolution, an ET0 product at 4 km spatial resolution from INSAT 3D, and meteorological forcing variables (air temperature and relative humidity) from short-range NWP model weather forecasts. Farm-scale ET estimates at 65 m spatial resolution showed a low RMSE of 16.6% to 17.5% with R² > 0.8 across 18 datasets when compared with in situ measurements from eddy covariance systems, against reported errors of 25-30% for coarser-scale ET at 1 to 8 km spatial resolution. The MAI showed lower (<0.25) and higher (>0.5) magnitudes in the two contrasting agricultural farms. The study demonstrates the need for high-resolution, high-repeat spaceborne multi-band TIR payloads, along with optical payloads, for estimating farm-scale ET and MAI and hence consumptive water use and water stress. A set of future high-resolution multi-band TIR sensors is planned on board the Indo-French TRISHNA, ESA’s LSTM, and NASA’s SBG space-borne missions to address sustainable irrigation water management at farm scale and improve crop water productivity. These will provide precise and fundamental surface energy balance variables such as LST (land surface temperature), surface emissivity, albedo and NDVI. Synchronization among these missions is needed in terms of observations, algorithms, product definitions, calibration-validation experiments and downstream applications to maximize the potential benefits.
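A minimal sketch of the per-pixel moisture adequacy computation follows: MAI = ET/ET0 with a simple stress classification. The arrays stand in for a 65 m ECOSTRESS-derived ET map and a resampled ET0 field; the values are random placeholders, and the class thresholds are illustrative choices anchored to the figures quoted above (stress onset near MAI < 0.75, with <0.25 and >0.5 marking the two contrasting farms).

```python
# MAI = ET/ET0 per pixel, plus a simple moisture-stress classification.
import numpy as np

et = np.random.uniform(1.0, 6.0, size=(100, 100))   # mm/day, illustrative
et0 = np.full_like(et, 6.0)                         # mm/day, illustrative

# Guard against division by zero in masked or nodata pixels.
mai = np.divide(et, et0, out=np.zeros_like(et), where=et0 > 0)

stress = np.select(
    [mai < 0.25, mai < 0.5, mai < 0.75],
    ["high stress", "moderate stress", "mild stress"],
    default="adequate moisture",
)
for label in np.unique(stress):
    frac = 100.0 * np.mean(stress == label)
    print(f"{label:18s}: {frac:5.1f}% of farm pixels")
```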

Keywords: thermal remote sensing, land surface temperature, crop water stress, evapotranspiration

Procedia PDF Downloads 70
315 Reactors with Effective Mixing as a Solutions for Micro-Biogas Plant

Authors: M. Zielinski, M. Debowski, P. Rusanowska, A. Glowacka-Gil, M. Zielinska, A. Cydzik-Kwiatkowska, J. Kazimierowicz

Abstract:

Technologies for micro-biogas plants with heating and mixing systems are presented as part of the Research Coordination for a Low-Cost Biomethane Production at Small and Medium Scale Applications (Record Biomap) project. The main objective of the Record Biomap project is to build a network of operators and scientific institutions interested in cooperation and in the development of promising technologies in the sector of small and medium-sized biogas plants. The activities carried out in the project will bridge the gap between research and market and reduce the implementation time of new, efficient technological and technical solutions. The first reactor, with simultaneous mixing and heating, is a concrete tank with a rectangular cross-section in which heating is integrated with the mixing of substrate and anaerobic sludge. This reactor is a solution dedicated to substrates with high solids content, which cannot be introduced to the reactor with pumps, even positive displacement pumps. Substrates are poured into the reactor and then mixed with anaerobic sludge by a screw pump. The pumped sludge, flowing through the screw pump, is simultaneously heated by a heat exchanger. The level of the fermentation sludge inside the reactor chamber is above the bottom edge of the cover. The cover of the reactor carries the screw pump drive: an electric motor installed inside the reactor drives the screw pump, and the heated sludge circulates in the digester. The post-fermented sludge is collected using a drain well, whose inlet is below the level of the sludge in the digester. The biogas is discharged from the reactor by the biogas intake valve located on the cover. This technology is very useful for the fermentation of lignocellulosic biomass and substrates with a high dry mass content (organic wastes). The second technology is a reactor for micro-biogas plants with a pressure mixing system. This reactor takes the form of a plastic or concrete tank with a circular cross-section, and effective mixing of the sludge is ensured by the tank bottom, which is profiled at 90°. Substrates for fermentation are supplied through an inlet well equipped with a cover that eliminates odour release. The introduction of a new portion of substrates is preceded by pumping of digestate to the disposal well; optionally, digestate can flow by gravity to a digestate storage tank. The biogas obtained is discharged into a separator, and a valve supplies the biogas to a blower. The blower pressurizes the biogas from the fermentation chamber in such a way as to facilitate the introduction of a new portion of substrates. Biogas is discharged from the reactor by a valve that enables biogas removal but prevents suction from outside the reactor.

Keywords: biogas, digestion, heating system, mixing system

Procedia PDF Downloads 154
314 Application of Acoustic Emissions Related to Drought Can Elicit Antioxidant Responses and Capsaicinoids Content in Chili Pepper Plants

Authors: Laura Helena Caicedo Lopez, Luis Miguel Contreras Medina, Ramon Gerardo Guevara Gonzales, Juan E. Andrade

Abstract:

In this study, we evaluated the effect of three different hydric stress conditions, low (LHS), medium (MHS), and high (HHS), on the capsaicinoid content and enzyme regulation of C. annuum plants. Five main peaks were detected using a laser vibrometer with 2 Hz resolution (Polytec-B&K). These peaks, or “characteristic frequencies”, were used as the acoustic emission (AE) treatments by transforming the signals into audible sound with the frequency (Hz) content of each hydric stress. Capsaicinoids (CAPs) are the main secondary metabolites of chili pepper plants and are known to increase during hydric stress conditions or short drought periods. The AE treatments were applied at two plant stages. The first was the pre-anthesis stage, to evaluate the genes encoding enzymes responsible for diverse metabolic activities of C. annuum plants: the antioxidant responses such as peroxidase (POD) and superoxide dismutase (Mn-SOD); phenylalanine ammonia-lyase (PAL), involved in the biosynthesis of phenylpropanoid compounds; chalcone synthase (CHS), related to natural defense mechanisms; and the species-specific aquaporin (CAPIP-1), which regulates the flow of water into and out of cells. The second stage was at 40 days after flowering (DAF), to evaluate the biochemical effect of AE related to hydric stress on capsaicinoid production. These two experiments were conducted to identify the molecular responses of C. annuum plants to AE, and to determine whether AE could elicit an increase in capsaicinoid content after a one-week exposure to the treatments. The results show that all AE treatment signals (LHS, MHS, and HHS) produced responses significantly different from the non-acoustic emission control (NAE). The AE treatments induced the up-regulation of POD (~2.8, 2.9, and 3.6, respectively), while the expression of the other antioxidant-response genes was treatment-dependent: HHS induced overexpression of Mn-SOD (~0.23) and PAL (~0.33), and MHS induced up-regulation of the CHS gene only (~0.63). On the other hand, the CAPIP-1 gene was down-regulated by all AE treatments, LHS, MHS, and HHS (~ -2.4, -0.43 and -6.4, respectively). The down-regulation likewise showed treatment-dependent particularities: LHS and MHS induced down-regulation of the SOD gene (~ -1.26 and -1.20, respectively) and of PAL (-4.36 and 2.05, respectively), and LHS and HHS showed the same tendency in the CHS gene (~ -1.12 and -1.02, respectively). Regarding the elicitation effect of AE on capsaicinoid content, additional controls were included: a white noise treatment (WN) to test the frequency-selectivity of the signals, and a hydric-stressed group (HS) for comparison of CAPs content. Our findings suggest that WN and NAE did not differ statistically. Conversely, HS and all AE treatments induced a significant increase in capsaicin (Cap) and dihydrocapsaicin (DCap) after one week of treatment. Specifically, the HS plants showed an increase of 8.33 times compared to the NAE and WN treatments, and 1.4 times higher than MHS, the AE treatment with the largest induction of capsaicinoids among treatments (5.88) relative to the controls.

Keywords: acoustic emission, capsaicinoids, elicitors, hydric stress, plant signaling

Procedia PDF Downloads 171
313 Investigation of Residual Stress Relief by in-situ Rolling Deposited Bead in Directed Laser Deposition

Authors: Ravi Raj, Louis Chiu, Deepak Marla, Aijun Huang

Abstract:

Hybridization of the directed laser deposition (DLD) process using an in-situ micro-roller to impart a vertical compressive load on the deposited bead at elevated temperatures can relieve the tensile residual stresses incurred in the process. To investigate this stress relief mechanism and its relationship with the in-situ rolling parameters, a fully coupled dynamic thermo-mechanical model is presented in this study. A single bead deposition of Ti-6Al-4V alloy, with an in-situ roller made of mild steel moving at a constant speed with a fixed nominal bead reduction, is simulated using the explicit solver of the finite element software Abaqus. The thermal model includes laser heating during the deposition process and the heat transfer between the roller and the deposited bead. The laser heating is modeled as a moving heat source with a Gaussian distribution, applied along the pre-formed bead’s surface using the VDFLUX Fortran subroutine; the bead’s cross-section is assumed to be semi-elliptical. Interfacial heat transfer between the roller and the bead is considered in the model, and the roller is cooled internally by axial water flow, modeled via convective heat transfer. The mechanical model for the bead and substrate includes the effects of rolling along with the deposition process, and their elastoplastic material behavior is captured using J2 plasticity theory. The model accounts for strain, strain rate, and temperature effects on the yield stress based on Johnson-Cook’s theory. These aspects of the material behavior are implemented in the FE software through the subroutines VUMAT (elastoplastic behavior), VUHARD (yield stress), and VUEXPAN (thermal strain). The roller is assumed to be elastic and does not undergo any plastic deformation, and contact friction at the roller-bead interface is considered in the model. Based on the thermal results for the bead, the distance between the roller and the deposition nozzle (roller offset) can be determined to ensure rolling occurs around the beta-transus temperature of the Ti-6Al-4V alloy. The roller offset and the nominal bead height reduction are identified as crucial parameters that influence the residual stresses in the hybrid process. The results obtained from a simulation at a roller offset of 20 mm and a nominal bead height reduction of 7% reveal that the tensile residual stresses decrease to about 52% due to in-situ rolling throughout the deposited bead. This model can be used to optimize the rolling parameters to minimize the residual stresses in the hybrid DLD process with in-situ micro-rolling.
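Two ingredients of such a model can be sketched compactly: the Gaussian moving heat source of the kind applied through a VDFLUX-type subroutine, and a Johnson-Cook flow stress with strain, strain-rate, and thermal-softening terms. This is an illustration under stated assumptions, not the authors' implementation; the Ti-6Al-4V parameters below are typical literature values, not the study's inputs.

```python
# Gaussian moving heat source and Johnson-Cook flow stress (sketch).
import numpy as np

def gaussian_flux(x, y, t, power=1000.0, radius=1.5e-3, speed=0.01, eta=0.4):
    """Surface heat flux (W/m^2) of a Gaussian source moving along x."""
    r2 = (x - speed * t) ** 2 + y ** 2
    return (2 * eta * power / (np.pi * radius ** 2)) * np.exp(-2 * r2 / radius ** 2)

def johnson_cook_yield(eps, eps_rate, T,
                       A=997.9e6, B=653.1e6, n=0.45, C=0.0198, m=0.7,
                       eps0=1.0, T_room=298.0, T_melt=1878.0):
    """Johnson-Cook flow stress (Pa): hardening, rate, thermal softening."""
    T_star = np.clip((T - T_room) / (T_melt - T_room), 0.0, 1.0)
    return ((A + B * eps ** n)
            * (1 + C * np.log(np.maximum(eps_rate / eps0, 1e-12)))
            * (1 - T_star ** m))

# Flux at a point 1 mm ahead of the beam center after 0.1 s of travel:
print(gaussian_flux(x=0.01 * 0.1 + 1e-3, y=0.0, t=0.1))
# Yield stress at 5% strain, 1/s strain rate, near the beta-transus (~1250 K):
print(johnson_cook_yield(0.05, 1.0, 1250.0))
```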

Keywords: directed laser deposition, finite element analysis, hybrid in-situ rolling, thermo-mechanical model

Procedia PDF Downloads 109
312 Tracing a Timber Breakthrough: A Qualitative Study of the Introduction of Cross-Laminated-Timber to the Student Housing Market in Norway

Authors: Marius Nygaard, Ona Flindall

Abstract:

The Palisaden student housing project was completed in August 2013 and was, with its eight floors, Norway’s tallest timber building at the time of completion. It was the first time cross-laminated timber (CLT) had been utilized at this scale in Norway. The project was the result of a concerted effort by a newly formed management company to establish CLT as a sustainable and financially competitive alternative to conventional steel and concrete systems. The introduction of CLT onto the student housing market proved so successful that by 2017 more than 4000 individual student residences will have been built using the same model of development and construction. The aim of this paper is to identify the key factors that enabled this breakthrough for CLT. It is based on an in-depth study of a series of housing projects and of the role of the management company that both instigated and enabled this shift of CLT from the margin to the mainstream. Specifically, it will look at how a new building system was integrated into a marketing strategy that identified a market potential within the existing structure of the construction industry and within the economic restrictions inherent to student housing in Norway. It will show how a key player established a project model that changed both the patterns of cooperation and the information basis for decisions. Based on qualitative semi-structured interviews with managers, contractors and the interdisciplinary teams of consultants (architects, structural engineers, acoustical experts, etc.), this paper will trace the introduction, expansion and evolution of CLT-based building systems in the student housing market. It will show how the project management firm’s position in the value chain enabled it to function as a liaison both between contractor and client and between contractor and producer, a position that allowed it to improve the flow of information. This ensured that CLT was handled on equal terms with other structural solutions in the project specifications, enabling realistic pricing and risk evaluation. Secondly, this paper will describe and discuss how the project management firm established and interacted with a growing network of contractors, architects and engineers to pool expertise and broaden the knowledge base across Norway’s regional markets. Finally, it will examine the role of the client, the building typology, and the industrial and technological factors in achieving this breakthrough for CLT in the construction industry. This paper gives an in-depth view of the progression of a single case rather than a broad description of the state of the art of large-scale timber building in Norway. However, this type of study may offer insights that are important to the understanding not only of specific markets but also of how new technologies can be introduced in big and well-established industries.

Keywords: cross-laminated-timber (CLT), industry breakthrough, student housing, timber market

Procedia PDF Downloads 223
311 Reading and Writing Memories in Artificial and Human Reasoning

Authors: Ian O'Loughlin

Abstract:

Memory networks aim to integrate some of the recent successes in machine learning with a dynamic memory base that can be updated and deployed in artificial reasoning tasks. These models involve training networks to identify, update, and operate over stored elements in a large memory array in order, for example, to perform question-and-answer tasks parsing real-world and simulated discourses. This family of approaches still faces numerous challenges: the performance of these network models in simulated domains remains considerably better than in open, real-world domains; wide-context cues remain elusive in parsing words and sentences; and even moderately complex sentence structures remain problematic. The innovation, employing an array of stored and updatable ‘memory’ elements over which the system operates as it parses text input and develops responses to questions, is a compelling one for at least two reasons. First, it addresses one of the difficulties that standard machine learning techniques face by providing a way to store a large bank of facts, offering a way forward for the kinds of long-term reasoning that, for example, recurrent neural networks trained on a corpus have difficulty performing. Second, the addition of a stored long-term memory component in artificial reasoning seems psychologically plausible; human reasoning appears replete with invocations of long-term memory, and the stored but dynamic elements in the arrays of memory networks are deeply reminiscent of the way that human memory is readily and often characterized. However, this apparent psychological plausibility is belied by a recent turn in the study of human memory in cognitive science. In recent years, the very notion that there is a stored element which enables remembering, however dynamic or reconstructive it may be, has come under deep suspicion. In the wake of constructive memory studies, amnesia and impairment studies, and studies of implicit memory, as well as following considerations from the cognitive neuroscience of memory and conceptual analyses from the philosophy of mind and cognitive science, researchers are now rejecting storage and retrieval, even in principle, and are instead seeking and developing models of human memory wherein plasticity and dynamics are the rule rather than the exception. In such models, storage is avoided entirely by modeling memory with a recurrent neural network designed to fit a preconceived energy function that attains zero values only for desired memory patterns, so that these patterns are the sole stable equilibrium points in the attractor network. So although the array of long-term memory elements in memory networks seems psychologically appropriate for reasoning systems, it may actually be incurring difficulties that are theoretically analogous to those that older, storage-based models of human memory have demonstrated. The kind of emergent stability found in attractor network models more closely fits our best understanding of human long-term memory than do the memory network arrays, despite appearances to the contrary.
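The storage-free alternative described here can be illustrated with a classic Hopfield-style attractor network, in which “memories” are not stored elements but the stable equilibrium points of the network dynamics, shaped by a Hebbian weight matrix so that the energy is minimized at the desired patterns. This sketch is a standard textbook illustration of the attractor idea, not a model from the paper.

```python
# Hopfield-style attractor network: recall as relaxation, not retrieval.
import numpy as np

rng = np.random.default_rng(42)
patterns = rng.choice([-1, 1], size=(3, 64))     # desired memory patterns

# Hebbian learning: the weights shape an energy landscape; no pattern
# is stored as an addressable element anywhere in the network.
W = sum(np.outer(p, p) for p in patterns) / patterns.shape[1]
np.fill_diagonal(W, 0)

def energy(s):
    return -0.5 * s @ W @ s

def recall(cue, steps=200):
    s = cue.copy()
    for _ in range(steps):                        # asynchronous updates
        i = rng.integers(len(s))
        s[i] = 1 if W[i] @ s >= 0 else -1
    return s

# Corrupt a pattern, then let the dynamics relax to the nearest attractor.
noisy = patterns[0] * np.where(rng.random(64) < 0.2, -1, 1)
recovered = recall(noisy)
print("overlap with original:", (recovered == patterns[0]).mean())
print("energy before/after:",
      energy(noisy.astype(float)), energy(recovered.astype(float)))
```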

Keywords: artificial reasoning, human memory, machine learning, neural networks

Procedia PDF Downloads 271
310 Enhanced Recoverable Oil in Northern Afghanistan Kashkari Oil Field by Low-Salinity Water Flooding

Authors: Zabihullah Mahdi, Khwaja Naweed Seddiqi

Abstract:

Afghanistan is located in a tectonically complex and dynamic area, surrounded by rocks that originated on the mother continent of Gondwanaland. The northern Afghanistan basin, which runs along the country's northern border, has the potential for petroleum generation and accumulation. The Amu Darya basin has the largest petroleum potential in the region. Sedimentation occurred in the Amu Darya basin from the Jurassic to the Eocene epochs. The Kashkari oil field is located in northern Afghanistan's Amu Darya basin. The field structure consists of a narrow northeast-southwest (NE-SW) anticline with two structural highs, the northwest limb being mild and the southeast limb being steep. The first oil production well in the Kashkari oil field was drilled in 1976, and a total of ten wells were drilled in the area between 1976 and 1979. The amount of original oil in place (OOIP) in the Kashkari oil field, based on the results of surveys and calculations conducted by research institutions, is estimated to be around 140 MMbbls. The objective of this study is to increase recoverable oil reserves in the Kashkari oil field through the implementation of the low-salinity water flooding (LSWF) enhanced oil recovery (EOR) technique. The LSWF work involved a core flooding laboratory test consisting of four sequential steps with varying salinities. The test commenced with formation water (FW) as the initial salinity, which was subsequently reduced to a salinity level of 0.1%. Afterwards, a numerical simulation model of core-scale oil recovery by LSWF was designed in Computer Modelling Group’s General Equation Modeler (CMG-GEM) software to evaluate the applicability of the technology to the field scale. Next, the Kashkari oil field simulation model was designed, and the LSWF method was applied to it. To obtain reasonable results, the laboratory settings (temperature, pressure, rock, and oil characteristics) were matched as far as possible to the conditions of the Kashkari oil field, and several injection and production patterns were investigated. The relative permeability of oil and water in this study was obtained using Corey’s equation. In the Kashkari oil field simulation model, three models were considered for the evaluation of the LSWF effect on oil recovery: (1) a base model with no water injection, (2) an FW injection model, and (3) an LSW injection model. Based on the results of the LSWF laboratory experiment and computer simulation analysis, the oil recovery increased rapidly after the FW was injected into the core. Subsequently, by injecting 1% salinity water, a gradual increase of 4% in oil recovery can be observed. About 6.4% of the field's oil is produced by the application of the LSWF technique. The results of LSWF (salinity 0.1%) on the Kashkari oil field suggest that this technology can be a successful method for developing Kashkari oil production.
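
Since the relative permeability curves in this study come from Corey’s equation, the following is a minimal Python sketch of that power-law model; the endpoint saturations, endpoint permeabilities, and Corey exponents below are assumed illustrative values, not the ones fitted for the Kashkari field.

```python
import numpy as np

def corey_relperm(sw, swc=0.2, sor=0.25, krw_max=0.4, kro_max=0.9, nw=2.0, no=2.0):
    """Water/oil relative permeability from Corey's power-law model.
    swc: connate water saturation, sor: residual oil saturation (assumed values)."""
    swn = np.clip((sw - swc) / (1.0 - swc - sor), 0.0, 1.0)  # normalized saturation
    krw = krw_max * swn**nw           # water relative permeability
    kro = kro_max * (1.0 - swn)**no   # oil relative permeability
    return krw, kro

sw = np.linspace(0.2, 0.75, 12)  # water saturation grid
krw, kro = corey_relperm(sw)
```

In a simulator such as CMG-GEM these curves would be supplied as tables; changing the exponents nw and no reshapes the water and oil curves independently.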

Keywords: low-salinity water flooding, immiscible displacement, Kashkari oil field, two-phase flow, numerical reservoir simulation model

Procedia PDF Downloads 42
309 Multi-Objective Optimization (Pareto Sets) and Multi-Response Optimization (Desirability Function) of Microencapsulation of Emamectin

Authors: Victoria Molina, Wendy Franco, Sergio Benavides, José M. Troncoso, Ricardo Luna, José R. Pérez-Correa

Abstract:

Emamectin benzoate (EB) is a crystalline antiparasitic that belongs to the avermectin family. It is one of the most common treatments used in Chile to control Caligus rogercresseyi in Atlantic salmon. However, sea lice acquire resistance to EB when exposed to sublethal doses. The low solubility rate of EB and its degradation at the acidic pH of the fish digestive tract are the causes of the slow absorption of EB in the intestine. To protect EB from degradation and enhance its absorption, specific microencapsulation technologies must be developed. Amorphous solid dispersion techniques such as spray drying (SD) and ionic gelation (IG) seem adequate for this purpose. Recently, Soluplus® (SOL) has been used to increase the solubility rate of several drugs with characteristics similar to those of EB. In addition, alginate (ALG) is a polymer widely used in IG for biomedical applications. Regardless of the encapsulation technique, the quality of the obtained microparticles is evaluated with the following responses: yield (Y%), encapsulation efficiency (EE%) and loading capacity (LC%). In addition, it is important to know the percentage of EB released from the microparticles in gastric (GD%) and intestinal (ID%) digestions. In this work, we microencapsulated EB with SOL (EB-SD) and with ALG (EB-IG) using SD and IG, respectively. Quality microencapsulation responses and in vitro gastric and intestinal digestions at pH 3.35 and 7.8, respectively, were obtained. A central composite design was used to find the optimum microencapsulation variables (amount of EB, amount of polymer and feed flow). In each formulation, the behavior of these variables was predicted with statistical models. Then, response surface methodology was used to find the combination of factors that allowed a lower EB release under gastric conditions while permitting a greater release during intestinal digestion. Two approaches were used to determine this: the desirability approach (DA) and multi-objective optimization (MOO) with multi-criteria decision making (MCDM). Both microencapsulation techniques preserved the integrity of EB at acidic pH, given the small amount of EB released in the gastric medium, while EB-IG microparticles showed greater EB release during intestinal digestion. For EB-SD, the optimal conditions obtained with MOO plus MCDM yielded a good compromise among the microencapsulation responses. In addition, using these conditions, it is possible to reduce microparticle costs, since 60% less EB is required than at the optimum proposed by DA. For EB-IG, the optimization techniques used (DA and MOO) yielded solutions with different advantages and limitations. Applying DA, costs can be reduced by 21%, while Y, GD and ID show values 9.5%, 84.8% and 2.6% lower than the best condition. In turn, MOO yielded better microencapsulation responses, but at a higher cost. Overall, EB-SD with the operating conditions selected by MOO seems the best option, since a good compromise between costs and encapsulation responses was obtained.
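
As a hedged sketch of the desirability approach used here, the Python snippet below combines several responses into a single Derringer-Suich desirability score via a geometric mean; the response names match the abstract (Y, GD, ID), but all bounds and example values are assumptions for illustration, not the study's data.

```python
import numpy as np

def d_larger_is_better(y, lo, hi, s=1.0):
    """Desirability for a response to maximize (e.g. Y%, EE%, ID%)."""
    return np.clip((y - lo) / (hi - lo), 0.0, 1.0) ** s

def d_smaller_is_better(y, lo, hi, s=1.0):
    """Desirability for a response to minimize (e.g. GD%, gastric release)."""
    return np.clip((hi - y) / (hi - lo), 0.0, 1.0) ** s

# Illustrative response values for one formulation (assumed, not from the study).
Y, GD, ID = 78.0, 8.0, 85.0
D = (d_larger_is_better(Y, 50, 95)
     * d_smaller_is_better(GD, 0, 30)
     * d_larger_is_better(ID, 40, 95)) ** (1 / 3)  # geometric mean of the three
print(round(float(D), 3))
```

Maximizing D over the design factors (amount of EB, amount of polymer, feed flow) is what trades off low gastric release against high intestinal release; MOO instead keeps the responses separate and returns a Pareto set of compromises.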

Keywords: microencapsulation, multiple decision-making criteria, multi-objective optimization, Soluplus®

Procedia PDF Downloads 131
308 Multiscale Modelization of Multilayered Bi-Dimensional Soils

Authors: I. Hosni, L. Bennaceur Farah, N. Saber, R. Bennaceur

Abstract:

Soil moisture content is a key variable in many environmental sciences. Even though it represents a small proportion of the liquid freshwater on Earth, it modulates interactions between the land surface and the atmosphere, thereby influencing climate and weather. Accurate modeling of the above processes depends on the ability to provide a proper spatial characterization of soil moisture. The measurement of soil moisture content allows assessment of soil water resources in the fields of hydrology and agronomy. The second parameter in interaction with the radar signal is the geometric structure of the soil. Most traditional electromagnetic models consider natural surfaces as single-scale, zero-mean, stationary Gaussian random processes. Roughness behavior is characterized by statistical parameters like the root mean square (RMS) height and the correlation length. The main problem is that the agreement between experimental measurements and theoretical values is usually poor, due to the large variability of the correlation function; as a consequence, backscattering models have often failed to predict backscattering correctly. In this study, surfaces are considered as band-limited fractal random processes corresponding to a superposition of a finite number of one-dimensional Gaussian processes, each one having a spatial scale. Multiscale roughness is characterized by two parameters: the first is proportional to the RMS height, and the other is related to the fractal dimension. Soil moisture is related to the complex dielectric constant. This multiscale description has been adapted to two-dimensional profiles using the bi-dimensional wavelet transform and the Mallat algorithm to describe natural surfaces more correctly. We characterize the soil surface and subsurface by a three-layer geo-electrical model. The upper layer is described by its dielectric constant, thickness, a multiscale bi-dimensional surface roughness model obtained using the wavelet transform and the Mallat algorithm, and volume scattering parameters. The lower layer is divided into three fictitious layers separated by an assumed plane interface. These three layers were modeled as an effective medium characterized by an apparent effective dielectric constant taking into account the presence of air pockets in the soil. We adopted the 2D multiscale three-layer small perturbation model (SPM), including first air pockets in the soil substructure and then a vegetation canopy in the soil surface structure, to simulate the radar backscattering. A sensitivity analysis of the dependence of the backscattering coefficient on multiscale roughness and soil moisture was performed. Later, we proposed to change the dielectric constant of the multilayer medium so that it takes into account the different moisture values of each layer in the soil. A sensitivity analysis of the backscattering coefficient, including the air pockets in the volume structure, with respect to the multiscale roughness parameters and the apparent dielectric constant was carried out. Finally, we studied the behavior of the radar backscattering coefficient for a soil having a vegetation layer in its surface structure.
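
To make the multiscale surface description concrete, here is a minimal Python sketch that builds a 1D band-limited 'fractal' profile as a superposition of Gaussian-correlated processes, one per spatial scale, with the RMS amplitude of each component tied to its scale through an exponent playing the role of the fractal-dimension parameter. The scales, exponent, and grid are illustrative assumptions; the paper's actual construction is two-dimensional and wavelet-based (Mallat algorithm).

```python
import numpy as np

rng = np.random.default_rng(0)

def multiscale_profile(n=1024, dx=0.01, scales=(0.05, 0.2, 0.8), nu=0.3):
    """1D band-limited 'fractal' profile: a sum of zero-mean Gaussian processes,
    one per spatial scale; each component's RMS grows as scale**nu, with nu
    standing in for the fractal-dimension parameter (all values assumed)."""
    x = np.arange(n) * dx
    z = np.zeros(n)
    for L in scales:
        white = rng.standard_normal(n)
        kernel = np.exp(-0.5 * (x - x.mean()) ** 2 / L**2)  # Gaussian correlation
        comp = np.convolve(white, kernel, mode="same")       # correlated component
        comp *= (L**nu) / comp.std()                         # scale-dependent RMS
        z += comp
    return x, z

x, z = multiscale_profile()
print(round(float(z.std()), 3))  # overall RMS height of the composite profile
```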

Keywords: multiscale, bidimensional, wavelets, backscattering, multilayer, SPM, air pockets

Procedia PDF Downloads 125
307 Finite Element Analysis of the Anaconda Device: Efficiently Predicting the Location and Shape of a Deployed Stent

Authors: Faidon Kyriakou, William Dempster, David Nash

Abstract:

Abdominal aortic aneurysm (AAA) is a major life-threatening pathology for which modern approaches reduce the need for open surgery through the use of stenting. The success of stenting, though, is sometimes jeopardized by the final position of the stent graft inside the human artery, which may result in migration, endoleaks or blood flow occlusion. Herein, a finite element (FE) model of the commercial medical device Anaconda™ (Vascutek, Terumo) has been developed and validated in order to create a numerical tool able to provide useful clinical insight before the surgical procedure takes place. The Anaconda™ device consists of a series of NiTi rings sewn onto a woven polyester fabric, a structure that, despite its column stiffness, is flexible enough to be used in very tortuous geometries. For the purposes of this study, an FE model of the device was built in Abaqus® (version 6.13-2) with a combination of beam, shell and surface elements; the choice of these building blocks was made to keep the computational cost to a minimum. The validation of the numerical model was performed by comparing the deployed position of a full stent graft device inside a constructed AAA with a duplicate set-up in Abaqus®. Specifically, an AAA geometry was built in CAD software and included regions of both high and low tortuosity. Subsequently, the CAD model was 3D printed into a transparent aneurysm, and a stent was deployed in the lab following the steps of the clinical procedure. Images on the frontal and sagittal planes of the experiment allowed comparison with the results of the numerical model. By overlapping the experimental and computational images, the mean and maximum distances between the rings of the two models were measured in the longitudinal and transverse directions, and a 5 mm upper bound was set as a limit, as commonly used by clinicians when working with simulations. The two models showed very good agreement in their spatial positioning, especially in the less tortuous regions. As a result, and despite the inherent uncertainties of a surgical procedure, the FE model allows confidence that the final position of the stent graft, when deployed in vivo, can also be predicted with significant accuracy. Moreover, the numerical model ran in just a few hours, an encouraging result for applications in the clinical routine. In conclusion, the efficient modelling of a complicated structure which combines thin scaffolding and fabric has been demonstrated to be feasible. Furthermore, the ability to predict the location of each stent ring, as well as the global shape of the graft, has been shown. This can allow surgeons to better plan their procedures and medical device manufacturers to optimize their designs. The current model can further be used as a starting point for patient-specific CFD analysis.
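
The validation metric described above, per-ring mean and maximum deviation checked against a 5 mm bound, can be sketched as follows; this is a hedged illustration in Python assuming the experimental and simulated ring positions have already been registered into a common image frame, with placeholder data standing in for the measured coordinates.

```python
import numpy as np

def ring_deviation(sim_rings, exp_rings):
    """Mean and maximum distance between corresponding ring centre points,
    given as (n_rings, k) coordinate arrays in a common frame (mm)."""
    d = np.linalg.norm(sim_rings - exp_rings, axis=1)
    return d.mean(), d.max()

# Placeholder 2D coordinates standing in for the overlapped plane images (mm).
sim = np.random.rand(10, 2) * 50.0         # ring centres from the FE model
exp = sim + np.random.randn(10, 2) * 1.0   # 'experimental' centres nearby
mean_d, max_d = ring_deviation(sim, exp)
print(f"mean {mean_d:.2f} mm, max {max_d:.2f} mm, pass: {max_d < 5.0}")
```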

Keywords: AAA, efficiency, finite element analysis, stent deployment

Procedia PDF Downloads 191
306 Role of Lipid-Lowering Treatment in the Monocyte Phenotype and Chemokine Receptor Levels after Acute Myocardial Infarction

Authors: Carolina N. França, Jônatas B. do Amaral, Maria C.O. Izar, Ighor L. Teixeira, Francisco A. Fonseca

Abstract:

Introduction: Atherosclerosis is a progressive disease characterized by the deposition of lipids and fibrotic elements in large-caliber arteries. Conditions related to the development of atherosclerosis, such as dyslipidemia, hypertension, diabetes, and smoking, are associated with endothelial dysfunction. There is frequent recurrence of cardiovascular outcomes after acute myocardial infarction and, in this sense, cycles of mobilization of monocyte subtypes (classical, intermediate and nonclassical) secondary to myocardial infarction may determine the colonization of atherosclerotic plaques at different stages of development, contributing to early recurrence of ischemic events. The recruitment of different monocyte subsets during the inflammatory process requires the expression of the chemokine receptors CCR2, CCR5, and CX3CR1 to promote the migration of monocytes to the inflammatory site. The aim of this study was to evaluate the effect of six months of lipid-lowering treatment on the monocyte phenotype and chemokine receptor levels of patients after acute myocardial infarction (AMI). Methods: This is a PROBE (prospective, randomized, open-label trial with blinded endpoints) study (ClinicalTrials.gov identifier: NCT02428374). Adult patients (n=147) of both genders, aged 18-75 years, were randomized in a 2x2 factorial design to treatment with rosuvastatin 20 mg/day or simvastatin 40 mg/day plus ezetimibe 10 mg/day, as well as ticagrelor 90 mg twice daily or clopidogrel 75 mg/day, in addition to conventional AMI therapy. Blood samples were collected at baseline and after one month and six months of treatment. Monocyte subtypes (classical - inflammatory, intermediate - phagocytic, and nonclassical - anti-inflammatory) were identified, quantified and characterized by flow cytometry, and the expression of the chemokine receptors (CCR2, CCR5 and CX3CR1) was also evaluated in the mononuclear cells. Results: After six months of treatment, there was an increase in the percentage of classical monocytes and a reduction in nonclassical monocytes (p=0.038 and p < 0.0001, Friedman test), without differences for intermediate monocytes. Besides, classical monocytes had higher expression of CCR5 and CX3CR1 after treatment, without differences related to CCR2 (p < 0.0001 for CCR5 and CX3CR1; p=0.175 for CCR2). Intermediate monocytes had higher expression of CCR5 and CX3CR1 and lower expression of CCR2 (p = 0.003, p < 0.0001 and p = 0.011, respectively). Nonclassical monocytes had lower expression of CCR2 and CCR5, without differences for CX3CR1 (p < 0.0001, p = 0.009 and p = 0.138, respectively). There were no differences in the comparison between the four treatment arms. Conclusion: The data suggest a time-dependent modulation of classical and nonclassical monocytes and chemokine receptor levels. The higher percentage of classical monocytes (inflammatory cells) suggests a residual inflammatory risk, even under the treatments recommended for AMI. Indeed, these changes do not seem to be affected by the choice of lipid-lowering strategy.

Keywords: acute myocardial infarction, chemokine receptors, lipid-lowering treatment, monocyte subtypes

Procedia PDF Downloads 119
305 The Effect of Artificial Intelligence on Mobile Phones and Communication Systems

Authors: Ibram Khalafalla Roshdy Shokry

Abstract:

This paper presents a carrier-sense multiple access (CSMA) communication model based on a system-on-chip (SoC) design methodology. Such a model can be used to support the modeling of complex wireless communication systems, and its use is therefore an important step in the construction of high-performance communication systems. SystemC was selected because it offers a homogeneous design flow for complex designs (i.e., SoC and IP-based design). We use a swarm system to validate the CSMA model and to show the advantages of incorporating communication early in the design process. The wireless communication is created by modeling the CSMA protocol, which can be used to achieve communication among all the agents and to coordinate access to the shared medium (channel). The equipment of vehicles with wireless communication capabilities is expected to be the key to the evolution toward next-generation intelligent transportation systems (ITS). The IEEE community has been continuously working on the development of a wireless vehicular communication protocol for the enhancement of Wireless Access in Vehicular Environments (WAVE). Vehicular communication systems, known as V2X, support vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communications. The efficiency of such communication systems depends on several factors, among which the surrounding environment and mobility are prominent. Hence, this study focuses on the evaluation of the actual performance of vehicular communication, with special attention to the effects of the real environment and of mobility on V2X communication. It begins by determining the actual maximum range that such communication can support and then evaluates V2I and V2V performance. The Arada LocoMate OBU transmission system was used to test and evaluate the effect of the transmission range on V2X communication. The evaluation of V2I and V2V communication takes into account the real effects of low and high mobility on transmission. Multi-agent systems have received significant attention in various fields, including robotics, autonomous vehicles, and distributed computing, where multiple agents cooperate and communicate to achieve complex tasks. Efficient communication among agents is a critical aspect of these systems, as it directly influences their overall performance and scalability. This work explores essential communication factors and conducts a comparative assessment of the various protocols utilized in multi-agent systems, with emphasis on the strengths, weaknesses, and applicability of these protocols across diverse scenarios. The study also sheds light on emerging trends in communication protocols for multi-agent systems, including the incorporation of machine learning techniques and the adoption of blockchain-based solutions to ensure secure communication. These developments offer valuable insights into the evolving landscape of multi-agent systems and their communication protocols.
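
As a hedged illustration of the CSMA idea at the heart of the model (agents sense a shared channel and defer or transmit, with simultaneous transmissions colliding), here is a toy slotted simulation in Python; it is not the SystemC SoC model described in the abstract, and all parameters are assumptions.

```python
import random

def csma_throughput(n_agents=5, slots=10_000, p_ready=0.5, p_tx=0.3, seed=1):
    """Toy slotted CSMA: in each slot, agents with a pending frame that sense
    the channel idle transmit with probability p_tx; a slot succeeds only if
    exactly one agent transmits (two or more transmissions collide)."""
    rng = random.Random(seed)
    successes = 0
    for _ in range(slots):
        ready = [a for a in range(n_agents) if rng.random() < p_ready]
        txing = [a for a in ready if rng.random() < p_tx]
        successes += (len(txing) == 1)
    return successes / slots

print(csma_throughput())  # fraction of slots carrying a successful frame
```

Lowering p_tx reduces collisions but wastes idle slots; tuning that trade-off is exactly what the persistence and backoff policies of real CSMA protocols do.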

Keywords: communication, multi-agent systems, protocols, consensus, SystemC, modelling, simulation, CSMA

Procedia PDF Downloads 25
304 Hardware Implementation for the Contact Force Reconstruction in Tactile Sensor Arrays

Authors: María-Luisa Pinto-Salamanca, Wilson-Javier Pérez-Holguín

Abstract:

Reconstruction of contact forces is a fundamental technique for analyzing the properties of a touched object and is essential for regulating the grip force in slip control loops. It is based on processing the distribution, intensity, and direction of the forces captured by the sensors. Currently, efficient hardware alternatives are used ever more frequently in different fields of application, allowing the implementation of computationally complex algorithms, as is the case with tactile signal processing. The use of hardware for smart tactile sensing systems is a research area that promises to improve the processing time and portability requirements of applications such as artificial skin and robotics, among others. The literature review shows that hardware implementations are present today in almost all stages of smart tactile sensing systems except the force reconstruction process, the stage to which they have been least applied. This work presents a hardware implementation of a model-driven method reported in the literature for the contact force reconstruction of flat and rigid tactile sensor arrays from normal stress data. Starting from the analysis of a software implementation of this model, the proposed implementation parallelizes the tasks that facilitate the execution of the matrix operations and of a two-dimensional optimization function used to obtain a force vector for each taxel in the array. This work seeks to take advantage of the parallel hardware characteristics of field programmable gate arrays (FPGAs) and the possibility of applying appropriate parallelization techniques to the algorithms, guided by the rules of generalization, efficiency, and scalability in the tactile decoding process, and considering low latency, low power consumption, and real-time execution as the main design parameters. The results show a maximum estimation error of 32% in the tangential forces and 22% in the normal forces with respect to finite element modeling (FEM) simulations of Hertzian and non-Hertzian contact events, over sensor arrays of 10×10 taxels of different sizes. The hardware implementation was carried out on a Xilinx® MPSoC XCZU9EG-2FFVB1156 platform, which allows the reconstruction of force vectors following a scalable approach, from the information captured by tactile sensor arrays composed of up to 48×48 taxels using various transduction technologies. The proposed implementation demonstrates a reduction in estimation time by a factor of 180 compared to software implementations. Despite the relatively high values of the estimation errors, the information provided by this implementation on the tangential and normal tractions and the triaxial reconstruction of forces makes it possible to adequately reconstruct the tactile properties of the touched object, which are similar to those obtained in the software implementation and in the two FEM simulations taken as reference. Although the errors could be reduced, the proposed implementation is useful for decoding contact forces in portable tactile sensing systems, thus helping to expand electronic skin applications in robotic and biomedical contexts.
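
To show the per-taxel parallel structure that the FPGA design exploits, here is a deliberately simplified Python sketch: each taxel's 3-component force vector is derived independently from the normal stress map, so the work maps naturally onto parallel hardware. The gradient-based tangential estimate below is an assumed toy surrogate, not the model-driven method implemented in the paper.

```python
import numpy as np

def reconstruct_forces(normal_stress, k_t=0.4):
    """Toy per-taxel reconstruction: build a 3-component force vector per taxel
    from a normal stress map. The tangential components here come from the
    local stress gradient (an assumed surrogate model); the per-taxel work is
    embarrassingly parallel, which is what the hardware implementation exploits."""
    gy, gx = np.gradient(normal_stress)     # local stress gradients
    fz = normal_stress                      # normal component per taxel
    fx, fy = -k_t * gx, -k_t * gy           # surrogate tangential estimate
    return np.stack([fx, fy, fz], axis=-1)  # shape: (rows, cols, 3)

forces = reconstruct_forces(np.random.rand(10, 10))  # 10x10 taxel array
print(forces.shape)  # (10, 10, 3)
```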

Keywords: contact forces reconstruction, forces estimation, tactile sensor array, hardware implementation

Procedia PDF Downloads 195
302 Computational and Experimental Study of the Mechanics of Heart Tube Formation in the Chick Embryo

Authors: Hadi S. Hosseini, Larry A. Taber

Abstract:

In the embryo, the heart is initially a simple tubular structure that undergoes complex morphological changes as it transforms into a four-chambered pump. This work focuses on the mechanisms that create the heart tube (HT). The early embryo is composed of three relatively flat primary germ layers called endoderm, mesoderm, and ectoderm. Precardiac cells located within bilateral regions of the mesoderm called heart fields (HFs) fold and fuse along the embryonic midline to create the HT. The right and left halves of this plate fold symmetrically to bring their upper edges into contact along the midline, where they fuse. In a region near the fusion line, these layers then separate to generate the primitive HT and foregut, which then extend vertically. The anterior intestinal portal (AIP) is the opening at the caudal end of the foregut, which descends as the HT lengthens. The biomechanical mechanisms that drive this folding are poorly understood. Our central hypothesis is that folding is caused by differences in growth between the endoderm and mesoderm, while subsequent extension is driven by contraction along the AIP. The feasibility of this hypothesis is examined using experiments with chick embryos and finite element modeling (FEM). Fertilized White Leghorn chicken eggs were incubated for approximately 22-33 hours until the appropriate Hamburger-Hamilton stage (HH5 to HH9) was reached. To inhibit contraction, embryos were cultured in media containing blebbistatin (a myosin II inhibitor) for 18 h. Three-dimensional models were created using ABAQUS (Dassault Systèmes Simulia). The initial geometry consists of a flat plate comprising two layers that represent the mesoderm and endoderm. Tissue was treated as a nonlinear elastic material, with growth and contraction (negative growth) simulated using a theory in which the total deformation gradient is given by F = F*·G, where G is the growth tensor and F* is the elastic deformation gradient tensor. In embryos exposed to blebbistatin, initial folding and AIP descension occurred normally. However, after the HFs partially fused to create the upper part of the HT, fusion and AIP descension stopped, and the HT failed to grow longer. These results suggest that cytoskeletal contraction is required only for the later stages of HT formation. In the model, a larger biaxial growth rate in the mesoderm compared to the endoderm causes the bilayered plate to bend ventrally as the upper edge moves toward the midline, where it 'fuses' with the other half. This folding creates the upper section of the HT, as well as the foregut pocket bordered by the AIP. After this phase completes by stage HH7, contraction along the arch-shaped AIP pulls the lower edge of the plate downward, stretching the two layers. Results given by the model are in reasonable agreement with experimental data for the shape of the HT, as well as for the patterns of stress and strain. In conclusion, the results of our study support our hypothesis for the creation of the heart tube.
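
For readers unfamiliar with this notation, the growth decomposition used in the model can be written out as below; this is the standard morphoelastic (multiplicative) decomposition stated with the abstract's symbols, and the stress assumption is the usual one in that framework rather than a detail given in this abstract.

```latex
% Multiplicative decomposition of the total deformation gradient:
%   G grows (or contracts, for 'negative growth') the stress-free tissue,
%   F* then deforms it elastically; only F* generates stress.
\[
  \mathbf{F} = \mathbf{F}^{*}\mathbf{G},
  \qquad
  \boldsymbol{\sigma} = \hat{\boldsymbol{\sigma}}\!\left(\mathbf{F}^{*}\right)
                      = \hat{\boldsymbol{\sigma}}\!\left(\mathbf{F}\mathbf{G}^{-1}\right).
\]
```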

Keywords: heart tube formation, FEM, chick embryo, biomechanics

Procedia PDF Downloads 296
301 Melt-Electrospun Polypropylene Fabrics Functionalized with TiO2 Nanoparticles for Effective Photocatalytic Decolorization

Authors: Z. Karahaliloğlu, C. Hacker, M. Demirbilek, G. Seide, E. B. Denkbaş, T. Gries

Abstract:

Currently, the textile industry plays an important role in the world economy, especially in developing countries. Dyes and pigments used in the textile industry are significant pollutants. Most of them are azo dyes, which have a chromophore (-N=N-) in their structure. There are many methods for the removal of dyes from wastewater, such as chemical coagulation, flocculation, precipitation and ozonation, but these methods have numerous disadvantages, and alternative methods are needed for wastewater decolorization. Titanium-mediated photodegradation has generally been used due to the non-toxic, insoluble, inexpensive, and highly reactive properties of the titanium dioxide semiconductor (TiO2). Melt electrospinning is an attractive manufacturing process for thin fiber production from polypropylene (PP). PP fibers have been widely used in filtration due to their unique properties, such as hydrophobicity, good mechanical strength, chemical resistance and low-cost production. In this study, we aimed to investigate the effect of titanium nanoparticle localization and amine modification on dye degradation, and the applicability of the prepared chemically activated composite and pristine fabrics for a novel treatment of dyeing wastewater was evaluated. A photocatalyst material was prepared from titanium dioxide nanoparticles (nTi) and PP by a melt-electrospinning technique, and the electrospinning parameters of pristine PP and PP/nTi nanocomposite fabrics were optimized. Before functionalization with nTi, the surface of the fabrics was activated using glutaraldehyde (GA) and polyethyleneimine to promote dye degradation. Pristine PP and PP/nTi nanocomposite melt-electrospun fabrics were characterized using scanning electron microscopy (SEM) and X-ray photoelectron spectroscopy (XPS). Methyl orange (MO) was used as a model compound for the decolorization experiments. The photocatalytic performance of nTi-loaded pristine and nanocomposite melt-electrospun filters was investigated by varying the initial dye concentration (10, 20, and 40 mg/L). nTi-PP composite fabrics were successfully processed into a uniform, fibrous network of beadless fibers with diameters of 800±0.4 nm. The process parameters were determined as a voltage of 30 kV, a working distance of 5 cm, thermocouple and hot-coil temperatures of 260-300 ºC, and a flow rate of 0.07 mL/h. SEM results indicated that the TiO2 nanoparticles were deposited uniformly on the nanofibers, and XPS results confirmed the presence of titanium nanoparticles and the generation of amine groups after modification. According to the photocatalytic decolorization test results, the nTi-loaded GA-treated pristine and nTi-PP nanocomposite fabric filters have superior properties, reaching over 90% decolorization efficiency. In this work, melt-electrospun PP fabrics surface-functionalized with nTi were prepared as photocatalysts for wastewater treatment. The results showed that melt-electrospun nTi-loaded GA-treated composite or pristine PP fabrics have great potential for use as photocatalytic filters for the decolorization of wastewater and thus warrant further investigation.

Keywords: titanium oxide nanoparticles, polypropylene, melt-electrospinning

Procedia PDF Downloads 267
300 Solid Waste and Its Impact on Human Health

Authors: Waseem Akram, Hafiz Azhar Ali Khan

Abstract:

Unplanned urbanization, together with the change from a simple to a more technologically advanced lifestyle and the flow of rural populations to urban areas, has played a vital role in piling up loads of solid waste in our environment. Cities and towns have expanded beyond their boundaries, and uncontrolled population expansion has added to the overall environmental burden. Indifference, arising from the unresponsive behavior of people, thus remains one of the biggest problems today. Every day, huge amounts of solid waste are thrown into the streets, onto the roads, into parks, and in all those places frequently visited by human beings. In many countries of the world, this behavior has led to serious health concerns and environmental issues. Over 80% of the products sold in the market are packed in plastic bags. None of these bags are later recycled; they simply become a permanent environmental concern as they fly about, choke sewer lines, are burnt and release toxic gases into the environment, or form heaps in dumps. The lack of sorting of the daily waste generated by houses and other places leads to severe clogging of the sewerage lines and the formation of ponding areas, which ultimately favor vector-borne diseases and sometimes become a cause of poliovirus transmission. Solid waste heaps were inspected at different places in the cities. On visual assessment, all of the waste was classified into plastic bags, paper, broken plastic pots, clay pots, steel boxes, wrappers, etc. Solid waste dumping sites in the cities, and waste thrown outside the trash containers, usually contained wrappers, plastic bags, and unconsumed food products. Insect populations seen at these sites included house flies, bugs and cockroaches, as well as mosquito larvae breeding in water-filled wrappers, containers or plastic bags. The populations of mosquitoes, cockroaches and house flies were relatively very high at dumping sites close to human settlements. These populations have been associated with cases of dengue, malaria, dysentery and gastroenteritis, as well as skin allergies, during the monsoon and summer seasons. Thus, the dumping of huge amounts of solid waste in and near residential areas results in serious environmental concerns, the circulation of bad smells, and health-related issues. In some places, the waste is burnt to get rid of mosquitoes through smoke, which ultimately releases toxic material into the atmosphere. Therefore, a proper environmental strategy is needed to minimize the environmental burden, promote the concept of recycled products, and thus reduce the disease burden.

Keywords: solid waste accumulation, disease burden, mosquitoes, vector borne diseases

Procedia PDF Downloads 278
299 Seismic Response Control of Multi-Span Bridge Using Magnetorheological Dampers

Authors: B. Neethu, Diptesh Das

Abstract:

The present study investigates the performance of a semi-active controller using magnetorheological (MR) dampers for the seismic response reduction of a multi-span bridge. The application of structural control to structures during earthquake excitation involves numerous challenges, such as the proper formulation and selection of the control strategy, mathematical modeling of the system, uncertainty in system parameters, and noisy measurements. These problems, however, need to be tackled in order to design and develop controllers that will perform efficiently in such complex systems. A sliding mode control algorithm is adopted in the present study because, owing to its inherent stability and distinguished robustness to system parameter variation and external disturbances, it can accommodate uncertainty and imprecision better than many other control algorithms. In general, a semi-active control scheme using an MR damper requires two nested controllers: (i) an overall system controller, which derives the control force required to be applied to the structure, and (ii) an MR damper voltage controller, which determines the voltage required to be supplied to the damper in order to generate the desired control force. In the present study, a sliding mode algorithm is used to determine the desired optimal force, and the function of the voltage controller is to command the damper to produce that force. The clipped-optimal algorithm is used to find the command voltage supplied to the MR damper, which is regulated by a semi-active control law based on the sliding mode algorithm. The main objective of the study is to propose a robust semi-active control that can effectively control the responses of the bridge under real earthquake ground motions. A lumped mass model of the bridge is developed, and time history analysis is carried out by solving the governing equations of motion in state space form. The effectiveness of the MR dampers is studied through analytical simulations by subjecting the bridge to real earthquake records. In this regard, it may also be noted that the performance of controllers depends, to a great extent, on the characteristics of the input ground motions. Therefore, in order to study the robustness of the controller, its performance has been investigated for fourteen different earthquake ground motion records. The earthquakes are chosen in such a way that all possible characteristic variations can be accommodated: seven are near-field and seven are far-field, and they are further divided by frequency content into low-frequency, medium-frequency, and high-frequency earthquakes. The responses of the controlled bridge are compared with the responses of the corresponding uncontrolled bridge (i.e., the bridge without any control devices). The results of the numerical study show that the sliding-mode-based semi-active control strategy can substantially reduce the seismic responses of the bridge, showing stable and robust performance for all the earthquakes.
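
The clipped-optimal voltage law described above has a compact standard form in the semi-active control literature; the sketch below states it in Python, with the desired force assumed to come from the sliding mode controller and the maximum voltage an assumed device limit rather than a value from this study.

```python
V_MAX = 10.0  # maximum damper command voltage (assumed device limit)

def clipped_optimal_voltage(f_desired, f_measured):
    """Clipped-optimal law: command maximum voltage only when the measured
    damper force is smaller in magnitude than, and acting in the same
    direction as, the desired control force; otherwise command zero."""
    return V_MAX if (f_desired - f_measured) * f_measured > 0.0 else 0.0

# f_desired from the sliding mode controller, f_measured from a load cell:
print(clipped_optimal_voltage(f_desired=120.0, f_measured=80.0))  # -> 10.0
print(clipped_optimal_voltage(f_desired=60.0, f_measured=80.0))   # -> 0.0
```

Because the MR damper is semi-active, the voltage can only modulate the force the device is already generating; the clipping prevents the controller from demanding forces the damper cannot physically produce.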

Keywords: bridge, semi-active control, sliding mode control, MR damper

Procedia PDF Downloads 124