605 Synthesis of (S)-Naproxen Based Amide Bond Forming Chiral Reagent and Application for Liquid Chromatographic Resolution of (RS)-Salbutamol
Authors: Poonam Malik, Ravi Bhushan
Abstract:
This work describes an efficient approach to the synthesis of an activated ester of (S)-naproxen, which was characterized by UV, IR, ¹H NMR, elemental analysis, and polarimetric studies. It was used as a C-N bond forming chiral derivatizing reagent for the synthesis of diastereomeric amides of (RS)-salbutamol (a β₂-adrenergic agonist marketed as the racemate) under microwave irradiation. The diastereomeric pair was separated by achiral-phase HPLC, using a mobile phase in gradient mode containing methanol and aqueous triethylamine phosphate (TEAP); separation conditions were optimized with respect to pH, flow rate, and buffer concentration, and the separation method was validated as per International Council for Harmonisation (ICH) guidelines. The reagent proved very effective for on-line sensitive detection of the diastereomers, with very low limit of detection (LOD) values of 0.69 and 0.57 ng mL⁻¹ for the diastereomeric derivatives of (S)- and (R)-salbutamol, respectively. The retention times were greatly reduced (2.7 min), with less consumption of organic solvents and a larger separation factor (α), as compared to literature reports. In addition, the diastereomeric derivatives were separated and isolated by preparative HPLC; these were characterized and used as standard reference samples for recording ¹H NMR and IR spectra to determine absolute configuration and elution order. This confirmed the success of the diastereomeric synthesis, established the reliability of the enantioseparation, and eliminated the requirement for a pure enantiomer of the analyte, which is generally not available.
The newly developed reagent can suitably be applied to several other amino-group-containing compounds, whether from organic synthesis or the pharmaceutical industry, because the presence of (S)-Npx as a strong chromophore allows sensitive detection. This work is significant not only in the area of enantioseparation and determination of the absolute configuration of diastereomeric derivatives but also in the development of new chiral derivatizing reagents (CDRs).
Keywords: chiral derivatizing reagent, naproxen, salbutamol, synthesis
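For readers unfamiliar with the separation factor (α) quoted above, it is computed from the retention times of the two diastereomeric peaks and the column dead time. A minimal sketch with hypothetical retention values, not the study's data:

```python
# Chromatographic selectivity (separation factor alpha) from retention data.
# All retention times below are hypothetical placeholders for illustration.

def separation_factor(t1, t2, t0):
    """alpha = k2/k1, with retention factors k = (tR - t0)/t0; t2 > t1 > t0."""
    k1 = (t1 - t0) / t0
    k2 = (t2 - t0) / t0
    return k2 / k1

alpha = separation_factor(t1=2.1, t2=2.7, t0=0.5)  # minutes (hypothetical)
```

A value of α well above 1 indicates baseline-separable diastereomeric peaks.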
604 Modelling of Solidification in a Latent Thermal Energy Storage with a Finned Tube Bundle Heat Exchanger Unit
Authors: Remo Waser, Simon Maranda, Anastasia Stamatiou, Ludger J. Fischer, Joerg Worlitschek
Abstract:
In latent heat storage, a phase change material (PCM) is used to store thermal energy. The heat transfer rate during solidification is limited and is considered a key challenge in the development of latent heat storages. Thus, finned heat exchangers (HEX) are often used to increase the heat transfer rate of the storage system. In this study, a new modeling approach for calculating the heat transfer rate in latent thermal energy storages with complex HEX geometries is presented. This model allows an optimization of the HEX design in terms of cost and thermal performance of the system. Modeling solidification processes requires the calculation of time-dependent heat conduction with moving boundaries. Commonly used computational fluid dynamics (CFD) methods enable the analysis of heat transfer in complex HEX geometries. If applied to the entire storage, the drawback of this approach is the high computational effort due to the small time steps and fine computational grids required for accurate solutions. An alternative way to describe the solidification process is the so-called temperature-based approach. To minimize the computational effort, a quasi-stationary assumption can be applied. This approach provides highly accurate predictions for tube heat exchangers; however, it shows unsatisfactory results for more complex geometries such as finned tube heat exchangers. The presented simulation model uses a temporal and spatial discretization of the heat exchanger tube. The spatial discretization is based on the smallest possible symmetric segment of the HEX. The heat flow in each segment is calculated using the finite volume method. Since the heat transfer fluid temperature can be derived using energy conservation equations, the boundary conditions at the inner tube wall are dynamically updated for each time step and segment.
The model allows a prediction of the thermal performance of latent thermal energy storage systems using complex HEX geometries at considerably low computational effort.
Keywords: modelling of solidification, finned tube heat exchanger, latent thermal energy storage
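The quasi-stationary, temperature-based idea described above can be sketched for the simplest geometry: a single tube with a solid PCM shell growing outward, where conduction through the shell is assumed to be at steady state within each time step. All material and geometry values below are illustrative placeholders, not the study's parameters:

```python
import math

# Quasi-stationary solidification of PCM around a single tube: at each step,
# the heat extracted through the solid shell by cylindrical conduction,
#   q = 2*pi*k*(Tm - Tw) / ln(R/r0)   [W per metre of tube],
# freezes new material at the front:  rho*L * 2*pi*R * dR/dt = q.
# Water-ice-like property values are used purely for illustration.

def solid_front(r0, t_wall, t_melt, k, rho, latent, dt, steps):
    """March the solidification front radius R(t) outward from tube radius r0."""
    r = r0 * 1.001  # start slightly off the wall so ln(R/r0) != 0
    history = [r]
    for _ in range(steps):
        q = 2 * math.pi * k * (t_melt - t_wall) / math.log(r / r0)
        dr = q * dt / (rho * latent * 2 * math.pi * r)
        r += dr
        history.append(r)
    return history

front = solid_front(r0=0.01, t_wall=-10.0, t_melt=0.0, k=0.6,
                    rho=920.0, latent=334000.0, dt=1.0, steps=600)
```

The front advances monotonically and slows as the insulating solid shell thickens, which is the behavior the finned-geometry model above generalizes per segment.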
603 A Simple and Empirical Refraction Correction Method for UAV-Based Shallow-Water Photogrammetry
Authors: I GD Yudha Partama, A. Kanno, Y. Akamatsu, R. Inui, M. Goto, M. Sekine
Abstract:
The aerial photogrammetry of shallow water bottoms has the potential to be an efficient high-resolution survey technique for shallow water topography, thanks to the advent of convenient UAVs and automatic image processing techniques (Structure-from-Motion (SfM) and Multi-View Stereo (MVS)). However, it suffers from a systematic overestimation of the bottom elevation due to light refraction at the air-water interface. In this study, we present an empirical method to correct for the effect of refraction after the usual SfM-MVS processing, using common software. The presented method utilizes the empirical relation between the measured true depth and the estimated apparent depth to generate an empirical correction factor. This correction factor is then used to convert the apparent water depth into a refraction-corrected (real-scale) water depth. To examine its effectiveness, we applied the method to two river sites and compared the RMS errors in the corrected bottom elevations with those obtained by three existing methods. The results show that the presented method is more effective than two of the existing methods: the method that applies no correction factor and the method that uses the refractive index of water (1.34) as the correction factor. In comparison with the remaining existing method, which adds an offset term after calculating the correction factor, the presented method performs better in Site 2 and worse in Site 1. However, we found this linear regression method to be unstable when the training data used for calibration are limited. It also suffers from a large negative bias in the correction factor when the estimated apparent water depth is affected by noise, according to our numerical experiment. Overall, the accuracy of a refraction correction method depends on various factors such as the location, image acquisition, and GPS measurement conditions. The most effective method can be selected by statistical selection (e.g. leave-one-out cross-validation).
Keywords: bottom elevation, MVS, river, SfM
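The core of the empirical correction described above, fitting a single multiplicative factor between apparent (SfM-MVS) depth and measured true depth and then applying it, can be sketched as follows. The calibration depths are synthetic stand-ins, not survey data:

```python
# Empirical refraction correction: fit true_depth ≈ c * apparent_depth by
# least squares through the origin, then apply c to all apparent depths.
# A factor near the refractive index of water (~1.34) is expected.

def fit_correction_factor(apparent, true):
    """Least-squares slope through the origin for the model true = c * apparent."""
    num = sum(a * t for a, t in zip(apparent, true))
    den = sum(a * a for a in apparent)
    return num / den

apparent = [0.30, 0.52, 0.75, 1.10]   # metres, synthetic SfM-MVS depths
true = [0.41, 0.70, 1.00, 1.47]       # metres, synthetic ground-truth depths
c = fit_correction_factor(apparent, true)
corrected = [c * a for a in apparent]  # refraction-corrected (real-scale) depths
```

Fitting c empirically, rather than fixing it at 1.34, lets the factor absorb site-specific effects, at the cost of the calibration-data sensitivity the abstract notes.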
602 Democratising Rivers: Local River Conflicts in Rajasthan
Authors: Renu Sisodia
Abstract:
This paper attempts to explore and explain local-level river water conflicts in the larger context of state-society relations. The study also covers the causes of local-level river water conflicts in the catchment areas of the Bandi and Arvari rivers of Rajasthan. The focus of the study is on the emergence of community-driven, decentralised management of river water bodies and the strategies used by local communities to protect river water and manage conflicts. The research was conducted by designing a framework based on essential theoretical and practical findings supported by primary and secondary data. Two in-depth case studies were conducted to understand the phenomenon in depth. The first field site is the Bandi River of Pali district, the scene of a struggle between textile industries, the community, and the state government, in which water pollution is said to be one of the driving forces of the conflict. Findings show that the state supports textile industries in Pali district that have not adhered to environmental ethics. The present legal infrastructure and local institutions have failed to resolve the serious problem of water pollution in the Bandi River and its adverse impact on the local community, resulting in community resistance against the local administration and the state government. The second case illustrates the plight of the Arvari River in Alwar district. A tussle over the ownership of fisheries between the local community, a private fish contractor, and the state government has been the main bone of contention. To resolve this conflict, the local community formed a conflict management mechanism named the Arvari Parliament. The Arvari Parliament has its own principles and rules to resolve water conflicts related to the ownership of the river and the use of its water.
The research findings also highlight the co-existence of conventional and modern practices in resolving conflicts.
Keywords: water, water pollution, water conflicts, water scarcity, conflict resolution, local community
601 Determining the Extent and Direction of Relief Transformations Caused by Ski Run Construction Using LIDAR Data
Authors: Joanna Fidelus-Orzechowska, Dominika Wronska-Walach, Jaroslaw Cebulski
Abstract:
Mountain areas are very often exposed to numerous transformations connected with the development of tourist infrastructure. In the mountain areas of Poland, ski tourism is very popular, so agricultural areas are often transformed into tourist areas. The construction of new ski runs can change the direction and rate of slope development. The main aim of this research was to determine the geomorphological and hydrological changes within slopes caused by ski run construction. The study was conducted in the Remiaszów catchment in the Inner Polish Carpathians (southern Poland). The mean elevation of the catchment is 859 m a.s.l. and the maximum is 946 m a.s.l. The surface area of the catchment is 1.16 km², of which 16.8% is occupied by the two studied ski runs, constructed in 2014 and 2015. In order to determine the relief transformations connected with the new ski run construction, high-resolution LIDAR data were analyzed. The general relief changes in the studied catchment were determined on the basis of ALS (Airborne Laser Scanning) data obtained before (2013) and after (2016) ski run construction. Based on the two sets of ALS data, a digital elevation model of differences (DoD) was created, which made it possible to quantify the relief changes in the entire studied catchment. Additionally, cross and longitudinal profiles were calculated within the slopes where the new ski runs were built. Detailed data on relief changes within selected test surfaces were obtained from TLS (Terrestrial Laser Scanning). Hydrological changes within the analyzed catchment were determined based on the convergence and divergence index. The study shows that the construction of the new ski runs caused significant geomorphological and hydrological changes in the entire studied catchment, although the most important changes were identified within the ski slopes themselves. After the construction of the ski runs, the entire catchment surface lowered by about 0.02 m.
Hydrological changes in the studied catchment mainly involved the interruption of surface runoff pathways and changes in runoff direction and geometry.
Keywords: hydrological changes, mountain areas, relief transformations, ski run construction
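The DoD computation described above reduces to a cell-wise difference of the two ALS-derived elevation grids, from which summary statistics such as the mean elevation change follow directly. A toy sketch with synthetic 3x3 grids standing in for the 2013 and 2016 DEMs:

```python
# DEM of Difference (DoD): subtract the pre-construction grid from the
# post-construction grid, cell by cell. Values are synthetic, in m a.s.l.

pre = [[860.0, 861.0, 862.0],
       [859.5, 860.5, 861.5],
       [859.0, 860.0, 861.0]]    # "2013" ALS-derived DEM (synthetic)
post = [[859.9, 860.9, 862.1],
        [859.4, 860.5, 861.4],
        [859.0, 859.9, 861.0]]   # "2016" ALS-derived DEM (synthetic)

dod = [[p2 - p1 for p1, p2 in zip(r1, r2)] for r1, r2 in zip(pre, post)]
cells = [v for row in dod for v in row]
mean_change = sum(cells) / len(cells)  # negative value => net surface lowering
```

In practice both grids must share the same extent, resolution, and vertical datum before differencing; that alignment step is omitted here.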
600 Development of a Software System for Management and Genetic Analysis of Biological Samples for Forensic Laboratories
Authors: Mariana Lima, Rodrigo Silva, Victor Stange, Teodiano Bastos
Abstract:
Due to the high reliability reached by DNA tests since the 1980s, this kind of test has allowed the identification of a growing number of criminal cases, including old cases that were unsolved and now have a chance to be solved with this technology. Currently, the use of genetic profiling databases is a typical method to increase the scope of genetic comparison. Forensic laboratories must process, analyze, and generate genetic profiles from a growing number of samples, which requires time and great storage capacity. Therefore, it is essential to develop methodologies capable of organizing and minimizing the time spent on both biological sample processing and the analysis of genetic profiles, using software tools. Thus, the present work aims at the development of a software system for forensic genetics laboratories that supports sample, criminal case, and local database management, minimizing the time spent in the workflow and helping to compare genetic profiles. For the development of this software system, all data related to the storage and processing of samples, the workflows, and the requirements the system must incorporate have been considered. The system uses the following languages: HTML, CSS, and JavaScript in Web technology, with the NodeJS platform as the server, which has great efficiency in data input and output. In addition, the data are stored in a relational database (MySQL), which is free, facilitating acceptance by users. The software system developed here brings more agility to the workflow and analysis of samples, contributing to the rapid insertion of genetic profiles into the national database and to an increased rate of crime resolution. The next step of this research is its validation, in order to operate in accordance with current Brazilian national legislation.
Keywords: database, forensic genetics, genetic analysis, sample management, software solution
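The relational layout implied above (criminal cases, samples, genetic profiles) might be sketched as follows. Note this is an assumed illustration, not the system's actual schema: the study uses MySQL behind a NodeJS server, while sqlite3 is used here only as a self-contained stand-in, and all table and column names are hypothetical:

```python
import sqlite3

# Illustrative relational schema for a forensic-genetics workflow:
# one case has many samples; each sample has per-locus profile rows.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE criminal_case (id INTEGER PRIMARY KEY, case_number TEXT UNIQUE);
CREATE TABLE sample (
    id INTEGER PRIMARY KEY,
    case_id INTEGER REFERENCES criminal_case(id),
    received_at TEXT,
    status TEXT DEFAULT 'pending'
);
CREATE TABLE genetic_profile (
    sample_id INTEGER REFERENCES sample(id),
    locus TEXT,
    alleles TEXT
);
""")
conn.execute("INSERT INTO criminal_case (case_number) VALUES ('2024-001')")
conn.execute("INSERT INTO sample (case_id, received_at) VALUES (1, '2024-01-15')")
pending = conn.execute(
    "SELECT COUNT(*) FROM sample WHERE status = 'pending'").fetchone()[0]
```

A status column of this kind is one simple way to track samples through the laboratory workflow the abstract describes.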
599 A Ku/K Band Power Amplifier for Wireless Communication and Radar Systems
Authors: Meng-Jie Hsiao, Cam Nguyen
Abstract:
Wide-band devices in the Ku band (12-18 GHz) and K band (18-27 GHz) have received significant attention for high-data-rate communications and high-resolution sensing. In particular, devices operating around 24 GHz are attractive due to the 24-GHz unlicensed applications. One of the most important components in RF systems is the power amplifier (PA). Various PAs have been developed in the Ku and K bands on GaAs, InP, and silicon (Si) processes. Although PAs using GaAs or InP processes can achieve better power handling and efficiency than those realized on Si, it is very hard to integrate an entire system on the same GaAs or InP substrate. Si, on the other hand, facilitates single-chip systems; hence, good PAs on Si substrates are desirable. In particular, a Si-based PA with good linearity is necessary for next-generation communication protocols implemented on Si. We report a 16.5-25.5 GHz Si-based PA with a flat saturated power of 19.5 ± 1.5 dBm, an output 1-dB compression point (OP1dB) of 16.5 ± 1.5 dBm, and 15-23% power-added efficiency (PAE). The PA consists of a drive amplifier, two main amplifiers, and lumped-element Wilkinson power dividers and combiners, designed and fabricated in the TowerJazz 0.18-µm SiGe BiCMOS process, which has a unity power gain frequency (fMAX) of more than 250 GHz. The PA is realized as a cascode amplifier combining heterojunction bipolar transistor (HBT) and n-channel metal-oxide-semiconductor field-effect transistor (NMOS) devices for gain, frequency response, and linearity considerations. In particular, a body-floating technique is utilized in the NMOS devices to improve the voltage swing and eliminate parasitic capacitances. The developed PA has a measured flat gain of 20 ± 1.5 dB across 16.5-25.5 GHz. At 24 GHz, the saturated power, OP1dB, and maximum PAE are 20.8 dBm, 18.1 dBm, and 23%, respectively. This high performance makes the PA attractive for Ku/K-band, especially 24 GHz, communication and radar systems.
This paper was made possible by NPRP grant # 6-241-2-102 from the Qatar National Research Fund (a member of Qatar Foundation). The statements made herein are solely the responsibility of the authors.
Keywords: power amplifiers, amplifiers, communication systems, radar systems
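The power-added efficiency figures quoted in the abstract follow from PAE = (Pout - Pin) / Pdc, with powers in watts. A small sketch: the 24 GHz output power and the roughly 20 dB gain echo the reported values, while the DC power consumption is an assumed placeholder, since the abstract does not state it:

```python
# Power-added efficiency from dBm-valued powers.
# PAE = (Pout - Pin) / Pdc, all in watts.

def dbm_to_watts(p_dbm):
    """Convert power in dBm to watts: P[W] = 10^(dBm/10) / 1000."""
    return 10 ** (p_dbm / 10.0) / 1000.0

def pae(p_out_dbm, p_in_dbm, p_dc_watts):
    return (dbm_to_watts(p_out_dbm) - dbm_to_watts(p_in_dbm)) / p_dc_watts

# Reported 20.8 dBm saturated output with ~20 dB gain (so about 0.8 dBm in),
# and an *assumed* 0.5 W DC draw chosen so the result lands near the
# reported 23% figure:
efficiency = pae(p_out_dbm=20.8, p_in_dbm=0.8, p_dc_watts=0.5)
```

This is why PAE is the usual PA metric rather than drain efficiency: it debits the input drive power as well as the DC supply.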
598 Computational Study on Traumatic Brain Injury Using Magnetic Resonance Imaging-Based 3D Viscoelastic Model
Authors: Tanu Khanuja, Harikrishnan N. Unni
Abstract:
The head is the most vulnerable part of the human body, and head impacts may cause severe, life-threatening injuries. As the in vivo brain response cannot be recorded during injury, computational investigation of a head model can be very helpful in understanding the injury mechanism. The majority of physical damage to living tissues is caused by relative motion within the tissue due to tensile and shearing structural failures. The present finite element study focuses on investigating intracranial pressure and stress/strain distributions resulting from impact loads on various sites of the human head. This is performed by developing a 3D model of a human head with major segments such as the cerebrum, cerebellum, brain stem, CSF (cerebrospinal fluid), and skull from patient-specific MRI (magnetic resonance imaging). The semi-automatic segmentation of the head is performed using AMIRA software to extract the finer grooves of the brain. Maintaining high accuracy requires a large number of mesh elements, which entails long computational times; therefore, mesh optimization has also been performed using tetrahedral elements. In addition, the model is validated against the experimental literature. Hard tissue such as the skull is modeled as elastic, whereas soft tissue such as the brain is modeled with a viscoelastic Prony series material model. This paper intends to obtain insights into the severity of brain injury by analyzing impacts on the frontal, top, back, and temporal sites of the head. Yield stress (based on the von Mises stress criterion for tissues) and intracranial pressure distributions due to impacts on different sites (frontal, parietal, etc.) are compared, and the extent of damage to cerebral tissues is discussed in detail. The analysis finds that a back impact is more injurious to the head overall than impacts at the other sites.
The present work should help in understanding the injury mechanism of traumatic brain injury more effectively.
Keywords: dynamic impact analysis, finite element analysis, intracranial pressure, MRI, traumatic brain injury, von Mises stress
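The Prony series material model mentioned above expresses the relaxation modulus as G(t) = G_inf + sum_i G_i * exp(-t / tau_i). A minimal sketch with a two-term series; the coefficients are placeholders, not the paper's calibrated brain-tissue values:

```python
import math

# Prony series shear relaxation modulus for a viscoelastic material:
# G(t) = G_inf + sum_i G_i * exp(-t / tau_i).

def prony_modulus(t, g_inf, terms):
    """terms is a list of (G_i, tau_i) pairs; all moduli in Pa, times in s."""
    return g_inf + sum(g * math.exp(-t / tau) for g, tau in terms)

terms = [(5.0e3, 0.008), (2.0e3, 0.15)]  # (G_i [Pa], tau_i [s]), illustrative
g0 = prony_modulus(0.0, g_inf=1.0e3, terms=terms)       # instantaneous modulus
g_late = prony_modulus(10.0, g_inf=1.0e3, terms=terms)  # long-time modulus
```

The instantaneous modulus G(0) = G_inf + sum G_i relaxes toward G_inf, which is what lets the model capture the rate-dependent stiffness of brain tissue under impact.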
597 Assimilating Multi-Mission Satellites Data into a Hydrological Model
Authors: Mehdi Khaki, Ehsan Forootan, Joseph Awange, Michael Kuhn
Abstract:
Terrestrial water storage, as a source of freshwater, plays an important role in human lives. Hydrological models offer important tools for simulating and predicting water storage at global and regional scales. However, their agreement with 'reality' is imperfect, mainly due to a high level of uncertainty in input data, limitations in accounting for all complex water cycle processes, uncertainties in (unknown) empirical model parameters, and the absence of high-resolution (both spatial and temporal) data. Data assimilation can mitigate this drawback by incorporating new sets of observations into models. In this effort, we use multi-mission satellite-derived remotely sensed observations to improve the performance of the World-Wide Water Resources Assessment system (W3RA) hydrological model for estimating terrestrial water storage. For this purpose, we assimilate total water storage (TWS) data from the Gravity Recovery And Climate Experiment (GRACE) and surface soil moisture data from the Advanced Microwave Scanning Radiometer for the Earth Observing System (AMSR-E) into W3RA. This is done to (i) improve model estimates of water stored in the ground and in soil moisture, and (ii) assess the impact of each satellite data set (GRACE and AMSR-E) and of their combination on the final terrestrial water storage estimates. These data are assimilated into W3RA using the Ensemble Square-Root Filter (EnSRF) technique over the Mississippi Basin (United States) and the Murray-Darling Basin (Australia) between 2002 and 2013. To evaluate the results, independent ground-based groundwater and soil moisture measurements within each basin are used.
Keywords: data assimilation, GRACE, AMSR-E, hydrological model, EnSRF
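The EnSRF update step can be sketched for a single scalar state directly observed by a GRACE-like total-water-storage measurement. This is a simplified illustration, assuming H = 1 and made-up numbers, not the W3RA configuration:

```python
import random

# Scalar Ensemble Square-Root Filter (EnSRF) update: the ensemble mean is
# nudged toward the observation with the Kalman gain, while the ensemble
# perturbations are deterministically scaled (no perturbed observations).

def ensrf_update(ensemble, obs, obs_var):
    n = len(ensemble)
    mean = sum(ensemble) / n
    devs = [x - mean for x in ensemble]
    p = sum(d * d for d in devs) / (n - 1)  # forecast error variance
    k = p / (p + obs_var)                   # Kalman gain (H = 1)
    # Square-root scaling factor for the perturbations (Whitaker-Hamill form):
    alpha = 1.0 / (1.0 + (obs_var / (p + obs_var)) ** 0.5)
    new_mean = mean + k * (obs - mean)
    return [new_mean + (1.0 - alpha * k) * d for d in devs]

random.seed(0)
prior = [100.0 + random.gauss(0, 5) for _ in range(50)]  # mm of storage, synthetic
posterior = ensrf_update(prior, obs=110.0, obs_var=4.0)
```

The posterior mean lies between the prior mean and the observation, and the ensemble spread shrinks, which is the behavior exploited when constraining W3RA with TWS and soil moisture data.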
596 Integrating Dynamic Brain Connectivity and Transcriptomic Imaging in Major Depressive Disorder
Authors: Qingjin Liu, Jinpeng Niu, Kangjia Chen, Jiao Li, Huafu Chen, Wei Liao
Abstract:
Functional connectomics is essential in cognitive science and neuropsychiatry, offering insights into the brain's complex network structures and dynamic interactions. Although neuroimaging has uncovered functional connectivity issues in Major Depressive Disorder (MDD) patients, the dynamic shifts in connectome topology and their link to gene expression are yet to be fully understood. To explore the differences in dynamic connectome topology between MDD patients and healthy individuals, we conducted an extensive analysis of resting-state functional magnetic resonance imaging (fMRI) data from 434 participants (226 MDD patients and 208 controls). We used multilayer network models to evaluate brain module dynamics and examined the association between whole-brain gene expression and dynamic module variability in MDD using publicly available transcriptomic data. Our findings revealed that, compared to healthy individuals, MDD patients showed lower global mean values and higher standard deviations of module variability, indicating unstable patterns and increased regional differentiation. Notably, MDD patients exhibited more frequent module switching, primarily within the executive control network (ECN), particularly in the left dorsolateral prefrontal cortex and right fronto-insular regions, whereas the default mode network (DMN), including the superior frontal gyrus, temporal lobe, and right medial prefrontal cortex, displayed lower variability. These brain dynamics predicted the severity of depressive symptoms. Analyzing human brain gene expression data, we found that the spatial distribution of MDD-related gene expression correlated with dynamic module differences. Cell type-specific gene analyses identified oligodendrocyte precursor cells (OPCs) as major contributors to the transcriptional relationships underlying module variability in MDD.
To the best of our knowledge, this is the first comprehensive description of altered brain module dynamics in MDD patients linked to depressive symptom severity and changes in whole-brain gene expression profiles.
Keywords: major depressive disorder, module dynamics, magnetic resonance imaging, transcriptomic
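The "module switching" measured above is commonly quantified as node flexibility: the fraction of consecutive layers (time windows) of the multilayer network in which a region changes its module assignment. A toy sketch with an illustrative assignment matrix (rows = brain regions, columns = layers):

```python
# Node flexibility from multilayer community assignments:
# fraction of layer-to-layer transitions where the module label changes.

def flexibility(assignments):
    """assignments[i][l] is the module label of node i in layer l."""
    out = []
    for node_modules in assignments:
        switches = sum(1 for a, b in zip(node_modules, node_modules[1:]) if a != b)
        out.append(switches / (len(node_modules) - 1))
    return out

modules = [
    [1, 1, 2, 2, 1],  # an ECN-like region: switches modules often
    [1, 1, 1, 1, 1],  # a DMN-like region: stable assignment
]
flex = flexibility(modules)
```

Higher flexibility for the first row and zero for the second mirrors the ECN-versus-DMN contrast the abstract reports in MDD patients.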
595 A Hedonic Valuation Approach to Valuing Combined Sewer Overflow Reductions
Authors: Matt S. Van Deren, Michael Papenfus
Abstract:
Seattle is one of hundreds of cities in the United States that rely on a combined sewer system to collect and convey municipal wastewater. By design, these systems convey all wastewater, including industrial and commercial wastewater, human sewage, and stormwater runoff, through a single network of pipes. Serious problems arise for combined sewer systems during heavy precipitation events, when treatment plants and storage facilities are unable to accommodate the influx of wastewater needing treatment, causing the sewer system to overflow into local waterways through sewer outfalls. Combined sewer overflows (CSOs) pose a serious threat to human and environmental health. The principal pollutants found in CSO discharge include microbial pathogens (bacteria, viruses, and parasites), oxygen-depleting substances, suspended solids, chemicals or chemical mixtures, and excess nutrients, primarily nitrogen and phosphorus. While the concentrations of these pollutants vary between overflow events, CSOs have the potential to spread disease and waterborne illnesses, contaminate drinking water supplies, disrupt aquatic life, and affect a waterbody's designated use. This paper estimates the economic impact of CSOs on residential property values. Using residential property sales data from Seattle, Washington, it employs a hedonic valuation model that controls for housing and neighborhood characteristics, as well as spatial and temporal effects, to predict a consumer's willingness to pay for improved water quality near their home. Initial results indicate that a 100,000-gallon decrease in the average annual overflow discharged from a sewer outfall within 300 meters of a home is associated with a 0.053% increase in the property's sale price. For the average home in the sample, the price increase is estimated to be $18,860.23.
These findings reveal some of the important economic benefits of improving water quality by reducing the frequency and severity of combined sewer overflows.
Keywords: benefits, hedonic, Seattle, sewer
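A stripped-down version of the hedonic specification can be sketched as a regression of log sale price on nearby annual overflow volume. The actual model additionally controls for housing, neighborhood, spatial, and temporal effects, and the sales data below are synthetic:

```python
import math

# Minimal hedonic regression: log(price) ~ overflow volume (one regressor).
# The slope is then read as the proportional price change per extra
# 100,000 gallons of nearby annual CSO discharge.

def ols_slope(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = sum((a - mx) ** 2 for a in x)
    return num / den

overflow_100kgal = [0.0, 1.0, 2.0, 3.0, 4.0]       # nearby annual CSO volume
prices = [500000, 485000, 470500, 456000, 442500]  # synthetic sale prices, USD
slope = ols_slope(overflow_100kgal, [math.log(p) for p in prices])
```

Because the dependent variable is in logs, the slope approximates a percentage effect, which is why the abstract reports the CSO coefficient in percent of sale price.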
594 Comparing the Apparent Error Rate of Gender Specifying from Human Skeletal Remains by Using Classification and Cluster Methods
Authors: Jularat Chumnaul
Abstract:
In forensic science, corpses from various homicides differ: they may be complete or incomplete, depending on the cause of death or the form of homicide. For example, some corpses are cut into pieces, some are camouflaged by dumping into a river, some are buried, and some are burned to destroy evidence. If a corpse is incomplete, personal identification becomes difficult because some tissues and bones have been destroyed. To specify the gender of a corpse from skeletal remains, the most precise method is DNA identification. However, this method is costly and takes longer, so other identification techniques are used instead. The first widely used technique is examining the features of the bones. In general, evidence from the corpse, such as pieces of bone, especially the skull and pelvis, can be used to identify gender. To use this technique, forensic scientists require observation skills in order to distinguish male from female bones. Although this technique is uncomplicated, saves time and cost, and allows forensic scientists to determine gender fairly accurately (apparently an accuracy rate of 90% or more), its crucial disadvantage is that only certain skeletal features can be used to specify gender, such as the supraorbital ridge, nuchal crest, temporal lobe, mandible, and chin. Therefore, the skeletal remains used have to be complete. The other technique widely used for gender specification in forensic science and archaeology is skeletal measurement. The advantage of this method is that it can use several positions on one piece of bone, and it can be applied even if the bones are incomplete. In this study, classification and cluster analyses are applied to this technique, including the Kth Nearest Neighbor Classification, Classification Tree, Ward Linkage Cluster, K-mean Cluster, and Two Step Cluster.
The data contain 507 individuals and 9 skeletal measurements (diameter measurements), and the performance of the five methods is investigated by considering the apparent error rate (APER). The results indicate that the Two Step Cluster and Kth Nearest Neighbor methods seem suitable for specifying gender from human skeletal remains, because both yield small apparent error rates of 0.20% and 4.14%, respectively. On the other hand, the Classification Tree, Ward Linkage Cluster, and K-mean Cluster methods are not appropriate, since they yield large apparent error rates of 10.65%, 10.65%, and 16.37%, respectively. However, there are other ways to evaluate classification performance, such as estimating the error rate using the holdout procedure or misclassification costs, and different methods can lead to different conclusions.
Keywords: skeletal measurements, classification, cluster, apparent error rate
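The apparent error rate used to compare the five methods is simply the fraction of training observations misclassified when the fitted rule is applied back to the same training data. A sketch scoring a k-nearest-neighbour classifier this way; the two-feature synthetic data stand in for the 9 diameter measurements:

```python
# Apparent error rate (APER) = misclassified training cases / total cases,
# here for a k-nearest-neighbour rule (k = 3, squared Euclidean distance).

def knn_predict(train_x, train_y, query, k=3):
    order = sorted(range(len(train_x)),
                   key=lambda i: sum((a - b) ** 2
                                     for a, b in zip(train_x[i], query)))
    votes = [train_y[i] for i in order[:k]]
    return max(set(votes), key=votes.count)

def apparent_error_rate(train_x, train_y, k=3):
    wrong = sum(1 for x, y in zip(train_x, train_y)
                if knn_predict(train_x, train_y, x, k) != y)
    return wrong / len(train_y)

x = [[42.0, 31.0], [44.0, 33.0], [43.0, 32.0],   # "male" cluster (synthetic)
     [36.0, 26.0], [35.0, 25.0], [37.0, 27.0]]   # "female" cluster (synthetic)
y = ["M", "M", "M", "F", "F", "F"]
aper = apparent_error_rate(x, y)
```

Because each point is scored against a training set that contains it, APER is optimistic, which is exactly why the abstract also mentions the holdout procedure.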
593 Implementing of Indoor Air Quality Index in Hong Kong
Authors: Kwok W. Mui, Ling T. Wong, Tsz W. Tsang
Abstract:
Many Hong Kong people nowadays spend most of their time working indoors. Since poor Indoor Air Quality (IAQ) potentially leads to discomfort, ill health, low productivity, and even absenteeism in workplaces, establishing statutory IAQ control to safeguard the well-being of residents is urgently needed. Although policies, strategies, and guidelines for workplace IAQ diagnosis have been developed elsewhere and followed with remedial works, some of those workplaces or buildings were at a relatively late stage of their IAQ problems by the time investigation or remedial work started. Screening for IAQ problems should be initiated, as it provides information as a minimum provision of the IAQ baseline requisite to resolving the problems. It is not practical to sample all air pollutants that exist. Nevertheless, for statutory control, reliable, rapid screening is essential, in accordance with a compromise strategy that balances cost against detection of key pollutants. This study investigates the feasibility of using an IAQ index as a parameter of IAQ control in Hong Kong. The index is a screening parameter to identify unsatisfactory workplace IAQ and will highlight where a fully effective IAQ monitoring and assessment is needed for intensive diagnosis. A number of representative common indoor pollutants have already been identified from extensive IAQ assessments. The selected pollutants serve as surrogates for IAQ control, which consists of dilution, mitigation, and emission control. The IAQ Index and assessment will look at high fractional quantities of these common measurement parameters. With the support of the existing comprehensive regional IAQ database and the research team's IAQ Index as the pre-assessment probability, and the unsatisfactory IAQ prevalence as the post-assessment probability from this study, thresholds for maintaining the current measures versus performing a further IAQ test or IAQ remedial measures will be proposed.
With justified resources, the proposed IAQ Index and assessment protocol could be a useful tool for setting up a practical public IAQ surveillance programme and policy in Hong Kong.
Keywords: assessment, index, indoor air quality, surveillance programme
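One plausible construction for such a screening index, assumed here for illustration rather than taken from the study, is to express each measured parameter as a fraction of a guideline limit and flag the worst (highest) fraction. The guideline values below echo common 8-hour office objectives but are assumptions, as are the sampled readings:

```python
# Screening IAQ index as the maximum "fractional quantity": each reading
# divided by its guideline limit. Index > 1 flags an unsatisfactory
# workplace that warrants a full IAQ assessment.

GUIDELINES = {"CO2_ppm": 1000.0, "PM10_ugm3": 180.0, "CO_ppm": 8.7}  # assumed

def iaq_index(readings, limits=GUIDELINES):
    return max(readings[p] / limits[p] for p in readings)

office = {"CO2_ppm": 850.0, "PM10_ugm3": 95.0, "CO_ppm": 1.2}  # sample readings
index = iaq_index(office)
needs_full_assessment = index > 1.0
```

Taking the maximum rather than an average keeps the screen conservative: a single pollutant exceeding its limit is enough to trigger the full assessment.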
592 Thermal and Solar Performances of Adsorption Solar Refrigerating Machine
Authors: Nadia Allouache
Abstract:
Solar radiation is by far the world's largest and most abundant clean and permanent energy source. The amount of solar radiation intercepted by the Earth is much higher than annual global energy use: the energy available from the sun was about 5200 times the global need in 2006. In recent years, many promising technologies have been developed to harness the sun's energy. These technologies help in environmental protection, energy economy, and sustainable development, which are major issues of the world in the 21st century. One of these important technologies is solar cooling systems, which make use of either absorption or adsorption technology. Solar adsorption cooling systems are a good alternative since they operate with environmentally benign refrigerants that are natural and free from CFCs, and therefore have zero ozone depletion potential (ODP). A numerical analysis of the thermal and solar performances of an adsorption solar refrigerating system using different adsorbent/adsorbate pairs, such as activated carbon AC35/methanol and activated carbon BPL/ammonia, is undertaken in this study. The modeling of the adsorption cooling machine requires the resolution of the equations describing the energy and mass transfer in the tubular adsorber, which is the most important component of the machine. The Wilson and Dubinin-Astakhov models of the solid-adsorbate equilibrium are used to calculate the adsorbed quantity. The porous medium is contained in the annular space, and the adsorber is heated by solar energy. The effects of key parameters on the adsorbed quantity and on the thermal and solar performances are analysed and discussed. The performance of the system depends on the incident global irradiance over the whole day and on the weather conditions, the condenser temperature, and the evaporator temperature.
The AC35/methanol pair outperforms the BPL/ammonia pair in terms of system performance.
Keywords: activated carbon-methanol pair, activated carbon-ammonia pair, adsorption, performance coefficients, numerical analysis, solar cooling system
Procedia PDF Downloads 72
591 Task Based Functional Connectivity within Reward Network in Food Image Viewing Paradigm Using Functional MRI
Authors: Preetham Shankapal, Jill King, Kori Murray, Corby Martin, Paula Giselman, Jason Hicks, Owen Carmicheal
Abstract:
Activation of reward and satiety networks in the brain while processing palatable food cues, as well as functional connectivity during rest, has been studied using functional Magnetic Resonance Imaging (fMRI) in various obesity phenotypes. However, functional connectivity within the reward and satiety network during food cue processing is understudied. Fourteen obese individuals underwent two fMRI scans while viewing Macronutrient Picture System images. Each scan included two blocks of High Sugar/High Fat (HSHF), High Carbohydrate/High Fat (HCHF), and Low Sugar/Low Fat (LSLF) images, as well as non-food images. Seed voxels within seven food-reward-relevant ROIs (insula, putamen, and cingulate, precentral, parahippocampal, medial frontal, and superior temporal gyri) were isolated based on a prior meta-analysis. Beta series correlation was computed for task-related functional connectivity between these seed voxels and the rest of the brain. Voxel-level differences in functional connectivity were calculated between the first and second scans; between individuals who saw novel (N=7) vs. repeated (N=7) images in the second scan; and between the HCHF and HSHF blocks vs. the LSLF and non-food blocks. The analysis showed that during food image viewing, reward network ROIs showed significant functional connectivity with each other and with other regions responsible for attentional and motor control, including the inferior parietal lobe and precentral gyrus. These functional connectivity values were heightened among individuals who viewed novel HSHF images in the second scan. In the second scan session, functional connectivity was reduced within the reward network but increased within attention, memory, and recognition regions, suggesting habituation to reward properties and increased recollection of previously viewed images.
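Once a beta estimate per trial per ROI is available, the beta series correlation step itself is straightforward; a minimal sketch with synthetic data (the trial count and the 7-ROI layout here are assumptions for illustration, not the study's design matrix):

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials = 40                                # one beta estimate per food-image trial
betas = rng.standard_normal((n_trials, 7))   # synthetic betas for 7 reward-network ROIs

def beta_series_connectivity(betas, seed_idx):
    """Pearson correlation between the seed ROI's beta series and every
    ROI's beta series, plus the Fisher z-transform commonly used for
    voxel-level group statistics."""
    r = np.corrcoef(betas, rowvar=False)[seed_idx]
    z = np.arctanh(np.clip(r, -0.999999, 0.999999))  # clip keeps the seed's r=1 finite
    return r, z

r, z = beta_series_connectivity(betas, seed_idx=0)
```

In a real analysis the betas would come from a trial-wise GLM fit (e.g., one regressor per image presentation), and the z maps would be contrasted between scan sessions or stimulus blocks.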
In conclusion, it can be inferred that functional connectivity within the reward network, and between the reward network and other brain regions, varies with important experimental conditions during food image viewing, including habituation to the foods shown.
Keywords: fMRI, functional connectivity, task-based, beta series correlation
Procedia PDF Downloads 270
590 Interior Architecture in the Anthropocene: Engaging the Subnature through the Intensification of Body-Surface Interaction
Authors: Verarisa Ujung
Abstract:
The Anthropocene, which scientists define as a new geological epoch in which human intervention has the dominant influence on geological, atmospheric, and ecological processes, challenges the contemporary discourse in architecture and interiors. This dominant influence is characterised by the incapability to distinguish the notions of nature, subnature, human, and non-human. Consequently, living in the Anthropocene demands sensitivity and responsiveness to heighten our sense of the rhythm of transformation and the recognition of our environment as a product of natural, social, and historical processes. The notion of subnature is particularly emphasised in this paper to investigate the poetic sense of living with subnature. It can be associated with a critical tool for exploring the aesthetic and programmatic implications of subnature on interiority. The ephemeral immateriality attached to subnature promotes a sense of the atmospheric delineation of interiority, the very inner significance of body-surface interaction, which is central to interior architecture discourse. This would then reflect human activities and examine transformative change, architectural motion, and the traces left between moments. In this way, engaging with the notion of subnature enables us to better understand the critical subject of interiority and might provide an in-depth study of interior architecture. Incorporating an exploration of the form, materiality, and pattern of subnature, this research seeks to grasp the inner significance of micro-to-macro approaches so that the future of the interior might be compelled to depend more on the investigation and development of responsive environments. To reflect upon the form, materiality, and intensity of subnature as specifically characterised by natural, social, and historical processes, this research examines a volcanic land, White Island/Whakaari, New Zealand, as the chosen site of investigation.
Emitting various forms and intensities of subnatures (smokes, mud, sulphur gas), this volcanic land is also open to new inhabitation within the sulphur factory ruins that reflect humans' past occupation. In this way, temporal and naturally selected manifestations of materiality, artefact, and performance can be traced out and might reveal the meaningful relations among space, inhabitation, and the well-being of inhabitants in the Anthropocene.
Keywords: anthropocene, body, intensification, intensity, interior architecture, subnature, surface
Procedia PDF Downloads 176
589 Nanoscale Mapping of the Mechanical Modifications Occurring in the Brain Tumour Microenvironment by Atomic Force Microscopy: The Case of the Highly Aggressive Glioblastoma and the Slowly Growing Meningioma
Authors: Gabriele Ciasca, Tanya E. Sassun, Eleonora Minelli, Manila Antonelli, Massimiliano Papi, Antonio Santoro, Felice Giangaspero, Roberto Delfini, Marco De Spirito
Abstract:
Glioblastoma multiforme (GBM) is an extremely aggressive brain tumor characterized by a diffuse infiltration of neoplastic cells into the brain parenchyma. Although rarely considered, mechanical cues play a key role in the infiltration process, which is extensively mediated by the stiffness of the tumor microenvironment and, more generally, by the occurrence of aberrant interactions between neoplastic cells and the extracellular matrix (ECM). Here we provide a nano-mechanical characterization of the viscoelastic response of human GBM tissues by indentation-type atomic force microscopy. High-resolution elasticity maps show a large difference between the biomechanics of GBM tissues and the healthy peritumoral regions, opening possibilities to optimize the tumor resection area. Moreover, we unveil the nanomechanical signature of necrotic regions and anomalous vasculature, two major hallmarks useful for glioma staging. At present, the morphological grading of GBM relies mainly on histopathological findings that make extensive use of qualitative parameters. Our findings have the potential to positively impact the development of novel quantitative methods to assess the tumor grade, which can be used in combination with conventional histopathological examinations. In order to provide a more in-depth description of the role of mechanical cues in tumor progression, we compared the nano-mechanical fingerprint of GBM tissues with that of grade-I (WHO) meningioma, a benign lesion characterized by a completely different growth pathway with respect to GBM, which in turn hints at a completely different role for biomechanical interactions.
Keywords: AFM, nano-mechanics, nanomedicine, brain tumors, glioblastoma
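Elasticity maps of this kind are typically built by fitting each force-indentation curve to a contact model; a minimal sketch assuming the Hertz model for a spherical tip (the abstract does not specify the fitting model used, so the model choice, tip radius, and modulus below are illustrative assumptions):

```python
import numpy as np

def hertz_young_modulus(force, indentation, tip_radius, poisson=0.5):
    """Estimate Young's modulus (Pa) from an AFM force-indentation curve
    using the Hertz model for a spherical indenter:
        F = (4/3) * E / (1 - nu^2) * sqrt(R) * delta^(3/2)
    Fitted by through-origin linear least squares of F vs delta^(3/2)."""
    x = indentation ** 1.5
    slope = np.dot(x, force) / np.dot(x, x)
    return slope * 0.75 * (1.0 - poisson**2) / np.sqrt(tip_radius)

# Synthetic curve: soft-tissue-like E = 2 kPa, 5 um tip radius (assumed values)
E_true, R = 2000.0, 5e-6
delta = np.linspace(0.0, 500e-9, 50)            # indentation depth, m
F = (4.0 / 3.0) * E_true / (1.0 - 0.25) * np.sqrt(R) * delta**1.5
E_est = hertz_young_modulus(F, delta, R)
```

Repeating this fit on a grid of indentation points yields the high-resolution elasticity map; viscoelastic characterization additionally requires the time- or frequency-dependent response, which this static sketch does not capture.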
Procedia PDF Downloads 341
588 Blue Finance: A Systematical Review of the Academic Literature on Investment Streams for Marine Conservation
Authors: David Broussard
Abstract:
This review article delves into the realm of marine conservation finance, addressing the inadequacies in current financial streams from the private sector and the underutilization of existing financing mechanisms. The study emphasizes the emerging field of “blue finance”, which contributes to economic growth, improved livelihoods, and marine ecosystem health. The financial burden of marine conservation projects typically falls on philanthropists and governments, contrary to the polluter-pays principle. However, the private sector’s increasing commitment to NetZero and growing environmental and social responsibility goals prompts the need for alternative funding sources for marine conservation initiatives like marine protected areas. The article explores the potential of utilizing several financing mechanisms like carbon credits and other forms of payment for ecosystem services in the marine context, providing a solution to the lack of private funding for marine conservation. The methodology employed involves a systematic and quantitative approach, combining traditional review methods and elements of meta-analysis. A comprehensive search of the years 2000 - 2023, using relevant keywords on the Scopus platform, resulted in a review of 252 articles. The temporal evolution of blue finance studies reveals a significant increase in annual articles from 2010 to 2022, with notable peaks in 2011 and 2022. Marine Policy, Ecosystem Services, and Frontiers in Marine Science are prominent journals in this field. While the majority of articles focus on payment for ecosystem services, there is a growing awareness of the need for holistic approaches in conservation finance. Utilizing bibliometric techniques, the article showcases the dominant share of payment for ecosystem services in the literature with a focus on blue carbon. 
The classification of articles based on various criteria, including financing mechanisms and conservation types, aids in categorizing and understanding the diversity of research objectives and perspectives in this complex field of marine conservation finance.
Keywords: biodiversity offsets, carbon credits, ecosystem services, impact investment, payment for ecosystem services
Procedia PDF Downloads 84
587 A Numerical Model for Simulation of Blood Flow in Vascular Networks
Authors: Houman Tamaddon, Mehrdad Behnia, Masud Behnia
Abstract:
An accurate study of blood flow is associated with an accurate vascular pattern and the geometrical properties of the organ of interest. Due to the complexity of vascular networks and poor accessibility in vivo, it is challenging to reconstruct the entire vasculature of any organ experimentally. The objective of this study is to introduce an innovative approach for the reconstruction of a full vascular tree from available morphometric data. Our method consists of implementing morphometric data on those parts of the vascular tree that are smaller than the resolution of medical imaging methods. This technique reconstructs the entire arterial tree down to the capillaries. Vessels greater than 2 mm are obtained from direct volume and surface analysis using contrast-enhanced computed tomography (CT). Vessels smaller than 2 mm are reconstructed from available morphometric and distensibility data and rearranged by applying Murray's laws. Implementing morphometric data to reconstruct the branching pattern while applying Murray's laws at every vessel bifurcation leads to an accurate vascular tree reconstruction. The reconstruction algorithm generates the full arterial tree topography down to the first capillary bifurcation. The geometry of each order of the vascular tree is generated separately to minimize construction and simulation time. The node-to-node connectivity, along with the diameter and length of every vessel segment, is established, and order numbers are assigned according to the diameter-defined Strahler system. During the simulation, we used the averaged flow rate for each order to predict the pressure drop; once the pressure drop is predicted, the flow rate is corrected to match the computed pressure drop for each vessel. The final results for three cardiac cycles are presented and compared to clinical data.
Keywords: blood flow, morphometric data, vascular tree, Strahler ordering system
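Applying Murray's law at a bifurcation amounts to enforcing that the cube of the parent diameter equals the sum of the cubes of the daughter diameters; a minimal sketch (the asymmetry ratio is a free illustrative parameter, not a value from the study):

```python
def murray_daughter_diameters(d_parent, ratio=1.0):
    """Murray's law at a bifurcation: d0^3 = d1^3 + d2^3.
    `ratio` = d1/d2 sets the asymmetry; ratio=1.0 is a symmetric split."""
    d2 = d_parent / (ratio**3 + 1.0) ** (1.0 / 3.0)
    d1 = ratio * d2
    return d1, d2

# Symmetric bifurcation of a 2 mm vessel:
d1, d2 = murray_daughter_diameters(2.0)
```

Repeating this rule recursively from the 2 mm imaging cutoff down to the first capillary bifurcation, with morphometric data fixing the branching pattern, is the essence of the reconstruction described above.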
Procedia PDF Downloads 272
586 Modelling of Exothermic Reactions during Carbon Fibre Manufacturing and Coupling to Surrounding Airflow
Authors: Musa Akdere, Gunnar Seide, Thomas Gries
Abstract:
Carbon fibres are fibrous materials with a carbon content of more than 90%. They combine excellent mechanical properties with a very low density. Thus, carbon fibre reinforced plastics (CFRP) are very often used in lightweight design and construction. The precursor material is usually polyacrylonitrile (PAN) based and wet-spun. During the production of carbon fibre, the precursor has to be stabilized thermally to withstand the high temperatures of up to 1500 °C which occur during carbonization. Even though carbon fibre has been used since the late 1970s in aerospace applications, there is still no general method available to find the optimal production parameters, and the trial-and-error approach is most often the only recourse. To gain much better insight into the process, the chemical reactions during stabilization have to be analyzed in detail. Therefore, a model of the chemical reactions (cyclization, dehydration, and oxidation) based on the research of Dunham and Edie has been developed. With the presented model, it is possible to perform a complete simulation of the fibre undergoing all zones of stabilization. The fibre bundle is modeled as several circular fibres with a layer of air in between. Two thermal mechanisms are considered to be the most important: the exothermic reactions inside the fibre and the convective heat transfer between the fibre and the air. The exothermic reactions inside the fibres are modeled as a heat source. Differential scanning calorimetry measurements have been performed to estimate the heat of the reactions. To shorten the required simulation time, the number of fibres is decreased by similitude theory. Experiments were conducted to validate the simulation results of the fibre temperature during stabilization. The validation experiments were conducted on a pilot-scale stabilization oven. To measure the fibre bundle temperature, a new measuring method was developed.
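The coupling of an exothermic source term to heat conduction in the fibre can be sketched with an explicit finite-difference scheme; the single Arrhenius reaction and all constants below are illustrative stand-ins, not the Dunham-Edie kinetics or the DSC-measured reaction heats.

```python
import numpy as np

# Schematic 1D explicit finite-difference model across a fibre:
#   dT/dt = alpha * d2T/dx2 + (dH / (rho * cp)) * k0 * exp(-Ea/(R*T)) * c
# A single Arrhenius reaction stands in for the cyclization/dehydration/
# oxidation scheme; the oven boundary replaces the convective air coupling.
alpha, rho, cp = 1e-7, 1180.0, 1300.0        # m^2/s, kg/m^3, J/(kg K) (illustrative)
dH, k0, Ea, R = 2.0e6, 1.0e8, 1.2e5, 8.314   # J/kg, 1/s, J/mol, J/(mol K)
nx, dx, dt = 21, 1e-6, 1e-6                  # grid; alpha*dt/dx^2 = 0.1 (stable)
T = np.full(nx, 500.0)                       # temperature field, K
c = np.ones(nx)                              # unreacted precursor fraction

for _ in range(1000):
    rate = k0 * np.exp(-Ea / (R * T)) * c            # Arrhenius reaction rate
    lap = (T[2:] - 2.0 * T[1:-1] + T[:-2]) / dx**2   # interior Laplacian
    T[1:-1] += dt * (alpha * lap + dH / (rho * cp) * rate[1:-1])
    c -= dt * rate                                   # consume precursor
    T[0] = T[-1] = 500.0                             # oven walls held at 500 K
```

The real model replaces the single reaction with the coupled reaction set and the fixed boundary with a convective fibre-air heat-transfer condition; the explicit stability limit alpha*dt/dx^2 <= 0.5 still governs the time step.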
The comparison of the results shows that the developed simulation model gives good approximations for the temperature profile of the fibre bundle during the stabilization process.
Keywords: carbon fibre, coupled simulation, exothermic reactions, fibre-air-interface
Procedia PDF Downloads 273
585 A Geophysical Study for Delineating the Subsurface Minerals at El Qusier Area, Central Eastern Desert, Egypt
Authors: Ahmed Khalil, Elhamy Tarabees, Svetlana Kovacikova
Abstract:
The Red Sea Mountains have been famous for their ore deposits since ancient times. Petrographic analysis and previous potential-field surveys also indicated large unexplored accumulations of ore minerals in the area. Therefore, the main goal of the present study is to contribute to the discovery of hitherto unknown ore mineral deposits in the Red Sea region. To achieve this goal, we used two geophysical techniques: a land magnetic survey and magnetotelluric data. A high-resolution land magnetic survey was acquired using two proton magnetometers, one instrument used as a base station for the diurnal correction and the other used to measure the magnetic field over the study area. Two hundred eighty land magnetic stations were measured over a mesh-like area with a 500 m spacing interval. The necessary reductions concerning daily variation, regional gradient, and time of observation were applied. Then, the total intensity anomaly map was constructed and transformed to the reduced-to-pole (RTP) magnetic map. The magnetic interpretation was carried out using the analytical signal, and regional-residual separation was carried out using the power spectrum. The tilt derivative (TDR) technique was also applied to delineate the structure and hidden anomalies. Data analysis was performed using trend analysis and Euler deconvolution. The results indicate that magnetic contacts are not the dominant geological feature of the study area. The magnetotelluric survey consisted of two profiles with a total of 8 broadband measurement points, each with a duration of about 24 hours, crossing Wadi Um Gheig approximately 50 km south of El Quseir. The collected data have been inverted to an electrical resistivity model using the modular 3D inversion code ModEM.
The model revealed a non-conductive body in its central part, probably corresponding to a dolerite dyke, to which possible ore mineralization could be related.
Keywords: magnetic survey, magnetotelluric, mineralization, 3D modeling
Procedia PDF Downloads 27
584 Imaginal and in Vivo Exposure Blended with EMDR: Becoming Unstuck, an Integrated Inpatient Treatment for Post-Traumatic Stress Disorder
Authors: Merrylord Harb-Azar
Abstract:
Traditionally, PTSD treatment has involved trauma-focused cognitive behaviour therapy (TF-CBT) to consolidate traumatic memories. A piloted integrated treatment of TF-CBT and eye movement desensitisation and reprocessing therapy (EMDR), delivered in eight phases, aims to accelerate the rate at which memory is consolidated and to enhance cognitive functioning in patients with PTSD. Patients spend a considerable amount of time in treatment managing traumas experienced first-hand or through aversive details, ranging from war, assaults, accidents, abuse, hostage situations, and riots to natural disasters. The time spent in treatment or as an inpatient affects overall quality of life, relationships, cognitive functioning, and overall sense of identity. EMDR is offered twice a week in conjunction with standard prolonged exposure as inpatient treatment in a private hospital. Prolonged exposure for up to 5 hours per day elicits the affect response required for the afternoon EMDR sessions to unlock unprocessed memories and facilitate consolidation in the amygdala and hippocampus. Results indicate faster consolidation of memories, a reduction in symptoms in a shorter period of time, and a reduction in admission time, which enhances quality of life, relationships, and cognition. The Impact of Event Scale (IES), Trauma Symptom Inventory (TSI), and Posttraumatic Stress Disorder Checklist (PCL) results demonstrate significant reductions in symptoms, with large effect sizes to date. An integrated treatment approach for PTSD achieves a faster resolution of memories, improves cognition, and reduces the amount of time spent in therapy.
Keywords: EMDR enhances cognitive functioning, faster consolidation of trauma memory, integrated treatment of TF CBT and EMDR, reduction in inpatient admission time
Procedia PDF Downloads 145
583 Motif Search-Aided Screening of the Pseudomonas syringae pv. Maculicola Genome for Genes Encoding Tertiary Alcohol Ester Hydrolases
Authors: M. L. Mangena, N. Mokoena, K. Rashamuse, M. G. Tlou
Abstract:
Tertiary alcohol ester (TAE) hydrolases are a group of esterases (EC 3.1.1.-) that catalyze the kinetic resolution of TAEs; as a result, they are sought after for the production of optically pure tertiary alcohols (TAs), which are useful as building blocks for a number of biologically active compounds. What sets these enzymes apart is the presence of a GGG(A)X-motif in the active site, which appears to be the main reason for their activity towards the sterically demanding TAEs. The genome of Pseudomonas syringae pv. maculicola (Psm) comprises a multitude of genes that encode esterases. We therefore hypothesize that some of these genes encode TAE hydrolases. In this study, Psm was screened for TAE hydrolase activity using the linalyl acetate (LA) plate assay, and a positive reaction was observed. As a result, the genome of Psm was screened for esterases with a GGG(A)X-motif using a motif search tool, and two potential TAE hydrolase genes (PsmEST1 and 2; 1100 and 1000 bp, respectively) were identified. PsmEST1 was amplified by PCR, and the gene was sequenced for confirmation. Analysis of the sequence data with the SignalP 4.1 server revealed that the protein comprises a signal peptide (22 amino acid residues) at the N-terminus. Primers specific for the gene encoding the mature protein (without the signal peptide) were designed such that they contain NdeI and XhoI restriction sites for directional cloning of the PCR products into pET28a. The gene was expressed in E. coli JM109 (DE3), and the clones were screened for TAE hydrolase activity using the LA plate assay. A positive clone was selected and overexpressed, and the protein was purified using nickel affinity chromatography. The activity of the esterase towards LA was confirmed using thin layer chromatography.
Keywords: hydrolases, tertiary alcohol esters, tertiary alcohols, screening, Pseudomonas syringae pv. maculicola genome, esterase activity, linalyl acetate
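A motif search for the GGG(A)X pattern reduces to a simple regular expression over the one-letter protein sequence; the toy sequence below is hypothetical, not the Psm esterase translation.

```python
import re

def find_gggax_motifs(protein_seq):
    """Locate GGG(A)X motifs (GGGX or GGAX, where X is any residue) in a
    one-letter-code protein sequence; returns (1-based position, motif) pairs."""
    return [(m.start() + 1, m.group())
            for m in re.finditer(r"GG[GA]\w", protein_seq)]

# Toy sequence for illustration only:
hits = find_gggax_motifs("MKTAYGGGFSLLAGGAVQK")
```

In practice, such a pattern would be run against the six-frame translation or annotated proteome of the genome, and hits would then be filtered for the esterase catalytic machinery before shortlisting candidates like PsmEST1 and 2.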
Procedia PDF Downloads 355
582 Generation of Ultra-Broadband Supercontinuum Ultrashort Laser Pulses with High Energy
Authors: Walid Tawfik
Abstract:
The interaction of intense short nano- and picosecond laser pulses with plasma leads to a rich variety of important applications, including time-resolved laser-induced breakdown spectroscopy (LIBS), soft X-ray lasers, and laser-driven accelerators. Progress in generating femtosecond down to sub-10 fs optical pulses has provided scientists with an essential tool for many ultrafast phenomena, such as femtochemistry, high-field physics, and high harmonic generation (HHG). The advent of high-energy laser pulses with durations of a few optical cycles has given scientists access to very high electric fields and produces coherent, intense UV-to-NIR radiation with high energy, which allows for the investigation of ultrafast molecular dynamics with femtosecond resolution. In this work, we experimentally achieved the generation of two-octave-wide supercontinuum ultrafast pulses extending from the ultraviolet at 3.5 eV to the near-infrared at 1.3 eV in a neon-filled capillary fiber. These pulses are created by nonlinear self-phase modulation (SPM) in neon as the nonlinear medium. The generated pulses were measured using spectral phase interferometry for direct electric-field reconstruction. A full characterization of the output pulses was performed, including the pulse width, the beam profile, and the spectral bandwidth. Under optimized conditions, the reconstructed pulse intensity autocorrelation function showed the shortest possible pulse duration, achieving transform-limited pulses with energies up to 600 µJ. Furthermore, the effect of varying the neon pressure on the pulse width was studied; the nonlinear SPM was found to increase with neon pressure. The obtained results may provide an opportunity to monitor and control ultrafast transient interactions in femtosecond chemistry.
Keywords: femtosecond laser, ultrafast, supercontinuum, ultra-broadband
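The broadening mechanism, SPM imprinting an intensity-dependent phase on the pulse, can be sketched numerically; the Gaussian pulse shape, 25 fs duration, and 10 rad peak nonlinear phase below are illustrative assumptions, not the experimental capillary parameters.

```python
import numpy as np

# Schematic self-phase-modulation broadening of a Gaussian pulse:
# the nonlinear phase is proportional to the instantaneous intensity,
# phi_nl(t) = phi_max * |E(t)|^2, which chirps the pulse and widens its spectrum.
t = np.linspace(-500e-15, 500e-15, 8192)   # time grid, s
tau = 25e-15                               # pulse duration parameter (assumed)
E0 = np.exp(-t**2 / (2.0 * tau**2))        # input field envelope
phi_max = 10.0                             # peak SPM phase, rad (assumed)
E_spm = E0 * np.exp(1j * phi_max * np.abs(E0)**2)

def rms_bandwidth(field):
    """RMS width of the power spectrum of a complex field envelope."""
    S = np.abs(np.fft.fftshift(np.fft.fft(field)))**2
    f = np.fft.fftshift(np.fft.fftfreq(field.size, t[1] - t[0]))
    mean = np.sum(f * S) / np.sum(S)
    return np.sqrt(np.sum((f - mean)**2 * S) / np.sum(S))

broadening = rms_bandwidth(E_spm) / rms_bandwidth(E0)   # spectral broadening factor
```

In the experiment the nonlinear phase grows with neon pressure, which is consistent with the observed pressure dependence of the broadening; this sketch omits dispersion and ionization, which shape the real supercontinuum.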
Procedia PDF Downloads 206
581 Shape Management Method of Large Structure Based on Octree Space Partitioning
Authors: Gichun Cha, Changgil Lee, Seunghee Park
Abstract:
The objective of this study is to construct a shape management method contributing to the safety of large structures. In Korea, research on shape management is scarce because the technology is newly attempted. Terrestrial Laser Scanning (TLS) is used for measurements of large structures. TLS provides an efficient way to actively acquire accurate point clouds of object surfaces or environments. The point clouds provide a basis for rapid modeling in industrial automation, architecture, construction, or maintenance of civil infrastructure. TLS produces a huge amount of point clouds, and the registration, extraction, and visualization of the data require the processing of a massive amount of scan data. The octree can be applied to the shape management of large structures because the scan data is reduced in size while the data attributes are maintained. Octree space partitioning generates voxels of 3D space, and each voxel is recursively subdivided into eight sub-voxels. The point cloud of the scan data was converted to voxels and sampled. The experimental site is located at Sungkyunkwan University. The scanned structure is a steel-frame bridge. The TLS used was a Leica ScanStation C10/C5. The scan data was condensed by 92%, and the octree model was constructed at a resolution of 2 millimeters. This study presents octree space partitioning for handling point clouds, creating a basis for the shape management of large structures such as double-deck tunnels, buildings, and bridges. The research is expected to improve the efficiency of structural health monitoring and maintenance. This work is financially supported by the 'U-City Master and Doctor Course Grant Program' and the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIP) (NRF-2015R1D1A1A01059291).
Keywords: 3D scan data, octree space partitioning, shape management, structural health monitoring, terrestrial laser scanning
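The voxel sampling step, mapping each point to its cell and keeping one representative per occupied cell, can be sketched as follows; the random cloud and 5 cm voxel size are illustrative choices (at that point-to-voxel ratio the cloud shrinks by roughly 92%, comparable to the compression reported, though the study's 2 mm resolution on real scan data is a different regime):

```python
import numpy as np

def voxel_downsample(points, voxel_size):
    """Octree-style voxel sampling: snap each point to its voxel index and
    keep the centroid of each occupied voxel, shrinking the cloud while
    preserving its shape attributes."""
    keys = np.floor(points / voxel_size).astype(np.int64)
    keys -= keys.min(axis=0)
    dims = keys.max(axis=0) + 1
    # Pack the 3 integer indices into one scalar key per voxel
    flat = (keys[:, 0] * dims[1] + keys[:, 1]) * dims[2] + keys[:, 2]
    _, inverse = np.unique(flat, return_inverse=True)
    n = inverse.max() + 1
    sums = np.zeros((n, 3))
    counts = np.zeros(n)
    np.add.at(sums, inverse, points)   # accumulate coordinates per voxel
    np.add.at(counts, inverse, 1.0)
    return sums / counts[:, None]

# 100,000 random points in a 1 m cube, 5 cm voxels: ~8,000 occupied voxels,
# i.e. roughly a 92% reduction in point count.
pts = np.random.default_rng(1).random((100_000, 3))
sampled = voxel_downsample(pts, 0.05)
```

A full octree adds the recursive eight-way subdivision on top of this flat voxelization, so regions can be refined adaptively down to the target resolution.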
Procedia PDF Downloads 297
580 The Urgency of Berth Deepening at the Port of Durban
Authors: Rowen Naicker, Dhiren Allopi
Abstract:
One of the major problems the Port of Durban is experiencing is addressing shallow spots aggravated by the megaships that berth there. In recent years, the vessels calling at the Port have increased in size, which calls for much deeper draughts. For this reason, these larger vessels can only berth at high tide to avoid the risk of running aground. In addition, the ships cannot sail in fully laden, which is not feasible for ship owners. Furthermore, during berthing, materials are displaced from the seabed, resulting in the development of shallow spots. The permitted draught (under-keel allowance) for the Durban Container Terminal (DCT) is currently 12.2 m. Transnet National Ports Authority (TNPA) is currently investing in a dredging fleet worth almost two billion rand; one of the highlights of this investment is the building of a grab hopper dredger dedicated to the Port by 2017. TNPA is trying various techniques to address the reduction in draught by implementing dredging maintenance projects, but is this sufficient? The ideal resolution would be the deepening and widening of the berths. Plans for this project are in place, but the implementation process is a matter of urgency. The intention of this project is to accommodate three large vessels rather than two, which in turn will improve the turnaround time in the port. Berthing will then no longer depend on high tide to avoid ships running aground. The aim of this paper is to show that the deepening and widening of the Port of Durban is a matter of urgency. If the plan to deepen and widen the berths at DCT is delayed, it will mean a loss of business for the South African economy. If larger vessels cannot be accommodated in the Port of Durban, they will bypass the busiest container handling facility in the Southern hemisphere. Shipping companies are compelled to use larger ships as opposed to smaller vessels to lower port and fuel costs.
A delay in the expansion of DCT could also result in an escalation of costs.
Keywords: DCT, deepening, berth, port
Procedia PDF Downloads 400
579 Aeromagnetic Data Interpretation and Source Body Evaluation Using Standard Euler Deconvolution Technique in Obudu Area, Southeastern Nigeria
Authors: Chidiebere C. Agoha, Chukwuebuka N. Onwubuariri, Collins U. Amasike, Tochukwu I. Mgbeojedo, Joy O. Njoku, Lawson J. Osaki, Ifeyinwa J. Ofoh, Francis B. Akiang, Dominic N. Anuforo
Abstract:
In order to interpret the airborne magnetic data and evaluate the approximate location, depth, and geometry of the magnetic sources within the Obudu area using the standard Euler deconvolution method, very high-resolution aeromagnetic data over the area were acquired, processed digitally, and analyzed using Oasis Montaj 8.5 software. Data analysis and enhancement techniques, including reduction to the equator, horizontal derivative, first and second vertical derivatives, upward continuation, and regional-residual separation, were carried out for the purpose of detailed data interpretation. Standard Euler deconvolution for structural indices of 0, 1, 2, and 3 was also carried out, and the respective maps were obtained using the Euler deconvolution algorithm. Results show that the total magnetic intensity ranges from -122.9 nT to 147.0 nT, the regional intensity varies between -106.9 nT and 137.0 nT, and the residual intensity ranges between -51.5 nT and 44.9 nT, clearly indicating the masking effect of deep-seated structures over surface and shallow subsurface magnetic materials. Results also indicated that the positive residual anomalies have an NE-SW orientation, which coincides with the trend of major geologic structures in the area. Euler deconvolution for all the considered structural indices yields depths to magnetic sources ranging from the surface to more than 2000 m. Interpretation of the various structural indices revealed the locations and depths of the source bodies and the existence of geologic models including sills, dykes, pipes, and spherical structures. The area is characterized by intrusive and very shallow basement materials and represents an excellent prospect for solid mineral exploration and development.
Keywords: Euler deconvolution, horizontal derivative, Obudu, structural indices
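The standard Euler equation solved in each data window relates the measured field and its gradients to the source position and the structural index N; a least-squares sketch on a synthetic homogeneous field (the source position, regional level, and observation grid below are invented for illustration, not the Obudu data):

```python
import numpy as np

def euler_solve(x, y, z, T, Tx, Ty, Tz, N):
    """Standard Euler deconvolution for one data window: solve
        (x - x0)*Tx + (y - y0)*Ty + (z - z0)*Tz = N * (B - T)
    by least squares for the source position (x0, y0, z0) and the
    regional level B, given the structural index N."""
    A = np.column_stack([Tx, Ty, Tz, np.full_like(Tx, float(N))])
    b = x * Tx + y * Ty + z * Tz + N * T
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    return sol  # x0, y0, z0, B

# Synthetic field homogeneous of degree -1 (structural index N = 1),
# source at (10, 20, 5) with regional level B = 50 -- illustrative only.
gx, gy = np.meshgrid(np.linspace(0.0, 20.0, 15), np.linspace(10.0, 30.0, 15))
x, y = gx.ravel(), gy.ravel()
z = np.zeros_like(x)                  # observations on the z = 0 plane
dx, dy, dz = x - 10.0, y - 20.0, z - 5.0
r = np.sqrt(dx**2 + dy**2 + dz**2)
T = 50.0 + 1.0 / r                    # field value
Tx, Ty, Tz = -dx / r**3, -dy / r**3, -dz / r**3   # analytic gradients
x0, y0, z0, B = euler_solve(x, y, z, T, Tx, Ty, Tz, N=1)
```

In production software the same solve is repeated over a moving window across the gridded survey, once per trial structural index, and poorly clustered or high-uncertainty solutions are rejected before the depth maps are drawn.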
Procedia PDF Downloads 81
578 Multidisciplinary Approach for a Tsunami Reconstruction Plan in Coquimbo, Chile
Authors: Ileen Van den Berg, Reinier J. Daals, Chris E. M. Heuberger, Sven P. Hildering, Bob E. Van Maris, Carla M. Smulders, Rafael Aránguiz
Abstract:
Chile is located along the subduction zone of the Nazca plate beneath the South American plate, where large earthquakes and tsunamis have taken place throughout history. The last significant earthquake (Mw 8.2) occurred in September 2015 and generated a destructive tsunami, which mainly affected the city of Coquimbo (71.33°W, 29.96°S). The inundation area comprised a beach, a damaged seawall, a damaged railway, a wetland, and an old neighborhood; therefore, local authorities started a reconstruction process immediately after the event. Moreover, a seismic gap has been identified in the same area, and another large event could take place in the near future. The present work proposes an integrated tsunami reconstruction plan for the city of Coquimbo that considers several variables such as safety, nature and recreation, neighborhood welfare, visual obstruction, infrastructure, construction process, and durability and maintenance. Possible future tsunami scenarios are simulated by means of the Non-hydrostatic Evolution of Ocean WAVEs (NEOWAVE) model with 5 nested grids and a finest grid resolution of ~10 m. Based on the scores from a multi-criteria analysis, the costs of the alternatives, and a preference for a multifunctional solution, the alternative that includes an elevated coastal road with floodgates to reduce tsunami overtopping and control the return flow of a tsunami was selected as the best solution. It was also observed that the wetlands are significantly restored to their former configuration; moreover, the dynamic behavior of the wetlands is stimulated. The numerical simulation showed that the new coastal protection decreases damage and the probability of loss of life by delaying the tsunami arrival time. In addition, new evacuation routes and a smaller inundation zone in the city increase safety for the area.
Keywords: tsunami, Coquimbo, Chile, reconstruction, numerical simulation
Procedia PDF Downloads 241
577 A General Form of Characteristics Method Applied on Minimum Length Nozzles Design
Authors: Merouane Salhi, Mohamed Roudane, Abdelkader Kirad
Abstract:
In this work, we present a new form of the method of characteristics, a technique for solving partial differential equations. Typically, it applies to first-order equations; the aim of the method is to reduce a partial differential equation to a family of ordinary differential equations along which the solution can be integrated from some initial data. The present form is developed under real gas theory, because when the thermal and caloric imperfections of a gas increase, the specific heats and their ratio no longer remain constant and start to vary with the gas parameters; the gas no longer behaves as a perfect gas, and its equation of state changes to that of a real gas. The presented characteristic equations remain valid whatever the area or field of study. Here, the developed Prandtl-Meyer function is inserted into the mathematical system to find a new model when the effect of stagnation pressure is taken into account. In this case, the effects of molecular size and intermolecular attraction forces intervene to correct the equation of state, the thermodynamic parameters, and the value of the Prandtl-Meyer function. With the assumption that Berthelot's equation of state accounts for molecular size and intermolecular force effects, expressions are developed for analyzing supersonic flow for a thermally and calorically imperfect gas. The supersonic parameters depend directly on the stagnation parameters of the combustion chamber. The resolution was carried out by the finite difference method using a predictor-corrector algorithm. As a result, the developed mathematical model is used to design 2D minimum length nozzles under the effect of the stagnation parameters of the fluid flow.
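For reference, the classical perfect-gas Prandtl-Meyer function that the real-gas (Berthelot) formulation generalizes can be sketched as:

```python
import math

def prandtl_meyer(M, gamma=1.4):
    """Perfect-gas Prandtl-Meyer function nu(M) in radians:
        nu = sqrt((g+1)/(g-1)) * atan(sqrt((g-1)/(g+1) * (M^2-1))) - atan(sqrt(M^2-1))
    The real-gas model in the paper replaces this closed form with a
    formulation corrected for thermal and caloric imperfections."""
    if M < 1.0:
        raise ValueError("supersonic Mach number required")
    g = (gamma + 1.0) / (gamma - 1.0)
    return (math.sqrt(g) * math.atan(math.sqrt((M**2 - 1.0) / g))
            - math.atan(math.sqrt(M**2 - 1.0)))

nu_deg = math.degrees(prandtl_meyer(2.0))   # about 26.38 degrees for gamma = 1.4
```

In a minimum length nozzle design, this function fixes the wall expansion angle at the throat (half the full turning to the design exit Mach number for the perfect-gas case), which is where the real-gas correction propagates into the nozzle shape.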
For air, the nozzle shapes and characteristics obtained with the real gas theory are compared with those of the perfect gas (PG) and high-temperature models. Keywords: numerical methods, nozzles design, real gas, stagnation parameters, supersonic expansion, the characteristics method
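As a point of reference for the comparison above, the perfect-gas Prandtl-Meyer function that the real-gas model corrects can be sketched as follows. This is the classical closed form for a calorically perfect gas, not the Berthelot-corrected expression developed in the work:

```python
import math

def prandtl_meyer(M, gamma=1.4):
    """Perfect-gas Prandtl-Meyer function nu(M), returned in degrees.

    Classical closed form for a calorically perfect gas; the real-gas
    (Berthelot) model discussed in the abstract replaces this with a
    corrected expression that also depends on the stagnation conditions.
    """
    if M < 1.0:
        raise ValueError("Prandtl-Meyer function is defined for M >= 1")
    g = (gamma + 1.0) / (gamma - 1.0)
    nu = math.sqrt(g) * math.atan(math.sqrt((M**2 - 1.0) / g)) \
         - math.atan(math.sqrt(M**2 - 1.0))
    return math.degrees(nu)

# For a 2D minimum-length nozzle, the wall angle at the throat lip is
# half the Prandtl-Meyer angle of the design exit Mach number.
M_exit = 2.0
print(prandtl_meyer(M_exit))        # ~26.38 deg for gamma = 1.4
print(prandtl_meyer(M_exit) / 2.0)  # initial expansion angle, ~13.19 deg
```

In the real-gas design, gamma is no longer constant, so nu(M) must instead be obtained by numerical integration, consistent with the predictor-corrector resolution described in the abstract.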
576 To Include or Not to Include: Resolving Ethical Concerns over the 20% High Quality Cassava Flour Inclusion in Wheat Flour Policy in Nigeria
Authors: Popoola I. Olayinka, Alamu E. Oladeji, B. Maziya-Dixon
Abstract:
Cassava, an indigenous crop grown locally by subsistence farmers in Nigeria, has the potential to bring economic benefits to the country. Consumption of bread and other confectionery has been on the rise due to the lifestyle changes of Nigerian consumers. However, wheat, the major ingredient in bread and confectionery production, does not thrive well under the Nigerian climate, hence the huge spending on wheat importation. To reduce this spending, the Federal Government of Nigeria intends to pass into law the mandatory inclusion of 20% high-quality cassava flour (HQCF) in wheat flour. While the proposed policy may reduce post-harvest losses of cassava and also increase food security and domestic agricultural productivity, there are downsides to the policy, including a reduction in the nutritional quality and low sensory appeal of cassava-wheat bread, the reluctance of flour millers to use HQCF, and technology and processing challenges, among others. The policy thus presents an ethical dilemma which must be resolved for its successful implementation. While the inclusion of HQCF in wheat flour for bread and confectionery is a topic that may have been well addressed, resolving the resulting ethical dilemma has not received much attention. This paper attempts to resolve this dilemma using various approaches in food ethics (cost-benefit, utilitarian, deontological, and deliberative). The cost-benefit approach did not provide an adequate resolution of the dilemma, as not all the costs and benefits of the policy could be stated in quantitative terms. The utilitarian approach suggests that the policy delivers the greatest good to the greatest number, while the deontological approach suggests that the act (inclusion of HQCF in wheat flour) is right; hence the policy is not utterly wrong. The deliberative approach suggests a win-win situation through deliberation with the parties involved. Keywords: HQCF, ethical dilemma, food security, composite flour, cassava bread