Search results for: three dimensional modeling
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5659

1159 A Comprehensive Study of Spread Models of Wildland Fires

Authors: Manavjit Singh Dhindsa, Ursula Das, Kshirasagar Naik, Marzia Zaman, Richard Purcell, Srinivas Sampalli, Abdul Mutakabbir, Chung-Horng Lung, Thambirajah Ravichandran

Abstract:

These days, wildland fires, also known as forest fires, are more prevalent than ever. Wildfires have major repercussions that affect ecosystems, communities, and the environment in several ways. Wildfires lead to habitat destruction and biodiversity loss, affecting ecosystems and causing soil erosion. They also contribute to poor air quality by releasing smoke and pollutants that pose health risks, especially for individuals with respiratory conditions. Wildfires can damage infrastructure, disrupt communities, and cause economic losses. The economic impact of firefighting efforts, combined with their direct effects on forestry and agriculture, causes significant financial difficulties for the affected areas. This research explores different forest fire spread models and presents a comprehensive review of various techniques and methodologies used in the field. A forest fire spread model is a computational or mathematical representation used to simulate and predict the behavior of a forest fire. By applying scientific concepts and data from empirical studies, these models attempt to capture the intricate dynamics of how a fire spreads, taking into consideration a variety of factors like weather patterns, topography, fuel types, and environmental conditions. These models assist authorities in understanding and forecasting the potential trajectory and intensity of a wildfire. Emphasizing the need for a comprehensive understanding of wildfire dynamics, this research explores the approaches, assumptions, and findings derived from various models. Using a comparative approach, a critical analysis is provided by identifying patterns, strengths, and weaknesses among these models. The purpose of the survey is to further wildfire research and management techniques. Decision-makers, researchers, and practitioners can benefit from the useful insights provided by synthesizing established information. Fire spread models provide insights into potential fire behavior, enabling authorities to make informed decisions about evacuation activities, allocation of resources for firefighting efforts, and planning of preventive actions. Wildfire spread models are also useful in post-wildfire mitigation strategies, as they help in assessing the fire's severity, determining high-risk regions for post-fire hazards, and forecasting soil erosion trends. The analysis highlights the importance of customized modeling approaches for various circumstances and advances our understanding of the way forest fires spread. Some of the known models in this field are Rothermel’s wildland fuel model, FARSITE, WRF-SFIRE, FIRETEC, FlamMap, FSPro, the cellular automata model, and others. The key characteristics these models consider include weather (factors such as wind speed and direction), topography (factors like landscape elevation), and fuel availability (factors like vegetation type), among others. The models discussed are physics-based, data-driven, or hybrid, with some utilizing ML techniques such as attention-based neural networks to enhance model performance. In order to lessen the destructive effects of forest fires, this initiative aims to promote the development of more precise prediction tools and effective management techniques. The survey expands its scope to address the practical needs of numerous stakeholders. Access to enhanced early warning systems enables decision-makers to take prompt action. Emergency responders benefit from improved resource allocation strategies, strengthening the efficacy of firefighting efforts.
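
Of the surveyed families, the cellular automata approach is the easiest to illustrate in a few lines. The sketch below is a minimal, hypothetical Python example of probabilistic cell-to-cell spread with a crude wind bias; the grid size, spread probability, and wind bonus are invented for illustration and are not taken from any of the models named above.

```python
import numpy as np

# Minimal cellular-automaton fire spread sketch (illustrative only; the
# probabilities and wind bias below are invented, not from any surveyed model).
EMPTY, FUEL, BURNING, BURNT = 0, 1, 2, 3

def step(grid, p_base=0.35, wind=(0, 1), wind_bonus=0.25, rng=None):
    """Advance the fire by one time step on a 2D grid of cell states."""
    rng = rng or np.random.default_rng(0)
    new = grid.copy()
    rows, cols = grid.shape
    for r in range(rows):
        for c in range(cols):
            if grid[r, c] != FUEL:
                continue
            # A burning neighbour may ignite this cell.
            for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                rr, cc = r + dr, c + dc
                if 0 <= rr < rows and 0 <= cc < cols and grid[rr, cc] == BURNING:
                    # Spread is more likely along the assumed wind direction.
                    p = p_base + (wind_bonus if (dr, dc) == wind else 0.0)
                    if rng.random() < p:
                        new[r, c] = BURNING
                        break
    new[grid == BURNING] = BURNT  # burning cells burn out after one step
    return new

grid = np.full((50, 50), FUEL)
grid[25, 25] = BURNING          # single ignition point
for _ in range(30):
    grid = step(grid)
print("burnt cells:", int((grid == BURNT).sum()))
```

Operational models such as FARSITE couple far richer fuel, weather, and topography inputs than this toy rule set, but the update loop has the same structure.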

Keywords: artificial intelligence, deep learning, forest fire management, fire risk assessment, fire simulation, machine learning, remote sensing, wildfire modeling

Procedia PDF Downloads 42
1158 Optimizing Groundwater Pumping for a Complex Groundwater/Surface Water System

Authors: Emery A. Coppola Jr., Suna Cinar, Ferenc Szidarovszky

Abstract:

Over-pumping of groundwater resources is a serious problem world-wide. In addition to depleting this valuable resource, hydraulically connected sensitive ecological resources like wetlands and surface water bodies are often impacted and even destroyed by over-pumping. Effectively managing groundwater in a way that satisfies human demand while preserving natural resources is a daunting challenge that will only worsen with growing human populations and climate change. As presented in this paper, a numerical flow model developed for a hypothetical but realistic groundwater/surface water system was combined with formal optimization. Response coefficients were used in an optimization management model to maximize groundwater pumping in a complex, multi-layered aquifer system while protecting against groundwater over-draft, streamflow depletion, and wetland impacts. Pumping optimization was performed for different constraint sets that reflect different resource protection preferences, yielding significantly different optimal pumping solutions. A sensitivity analysis on the optimal solutions was performed on select response coefficients to identify differences between wet and dry periods. Stochastic optimization was also performed, in which the uncertainty associated with changing irrigation demand due to changing weather conditions is accounted for. One of the strengths of this optimization approach is that it can efficiently and accurately identify superior management strategies that minimize risk and adverse environmental impacts associated with groundwater pumping under different hydrologic conditions.
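
A response-coefficient management model of this kind reduces to a linear program once the coefficients are in hand. The sketch below is a hedged Python illustration using scipy's linprog with invented response coefficients and limits; it is not the authors' model, only the general formulation.

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative response-matrix optimization (all numbers invented):
# maximize total pumping Q1 + Q2 + Q3 subject to linear constraints built
# from response coefficients, drawdown_i = sum_j R[i, j] * Q_j <= limit_i.
R_drawdown = np.array([[0.8, 0.3, 0.1],     # drawdown per unit pumping
                       [0.2, 0.9, 0.4]])
max_drawdown = np.array([5.0, 4.0])          # allowable drawdown at two protected sites

R_depletion = np.array([[0.05, 0.10, 0.30]])  # streamflow depletion per unit pumping
max_depletion = np.array([1.5])               # allowable depletion

# linprog minimizes, so negate the objective to maximize total pumping.
c = -np.ones(3)
A_ub = np.vstack([R_drawdown, R_depletion])
b_ub = np.concatenate([max_drawdown, max_depletion])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 10)] * 3, method="highs")

print("optimal pumping rates:", np.round(res.x, 2))
print("total pumping:", round(-res.fun, 2))
```

Different constraint sets (tighter or looser drawdown and depletion limits) simply change b_ub, which is how the different protection preferences described above yield different optimal solutions.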

Keywords: numerical groundwater flow modeling, water management optimization, groundwater overdraft, streamflow depletion

Procedia PDF Downloads 208
1157 Processing and Modeling of High-Resolution Geophysical Data for Archaeological Prospection, Nuri Area, Northern Sudan

Authors: M. Ibrahim Ali, M. El Dawi, M. A. Mohamed Ali

Abstract:

In this study, magnetic gradient and geoelectrical ground survey methods were used together to explore archaeological features in the Nuri pyramids area. The magnetic survey was carried out with a Geoscan Fluxgate Gradiometer (FM36). The study area was divided into a number of grids (networks) of exactly 20 x 20 meters, which were merged at the end of the study into a master grid for each region. Within each grid, readings were taken at a sampling interval of 0.25 x 0.50 meter in order to resolve archaeological features in more detail, including some small bipolar anomalies caused by buildings constructed from fired bricks. This resolution is important for identifying many archaeological features, such as rooms. The master grid gives an integrated map that is easy to display and allows all the required processing operations using Geoscan Geoplot software. Readings were collected along parallel traverses, the standard procedure for obtaining high-quality magnetic data. The study area is very rich in old buildings that vary from small to very large. Because of the sand dunes and the loose soil, most of these buildings are not visible at the surface. Owing to the dry sandy soil, there was no electrical contact between the ground surface and the electrodes. We tried to obtain electrical readings by adding salty water to the soil, but, unfortunately, we failed to confirm the magnetic readings with electrical readings as originally planned.

Keywords: archaeological features, independent grids, magnetic gradient, Nuri pyramid

Procedia PDF Downloads 456
1156 The Mediating Role of Psychological Factors in the Relationships Between Youth Problematic Internet and Subjective Well-Being

Authors: Dorit Olenik-Shemesh, Tali Heiman

Abstract:

The rapid increase in the massive use of the internet in recent years has led to an increase in the prevalence of a phenomenon called 'Problematic Internet Use' (PIU), an emerging and growing health problem, especially during adolescence, that poses a challenge for mental health research and practitioners. Problematic Internet Use (PIU) is defined as an excessive overuse of the internet, including an inability to control time spent on the internet, cognitive preoccupation with the internet, and continued use in spite of adverse consequences, which may lead to psychological, social, and academic difficulties in one's life and daily functioning. However, little is known about the nature of the nexus between PIU and subjective well-being among adolescents. The main purpose of the current study was to explore in depth the network of connections between PIU, sense of well-being, and four personal-emotional factors (resilience, self-control, depressive mood, and loneliness) that may mediate these relationships. A total sample of 433 adolescents, 214 (49.4%) girls and 219 (50.6%) boys between the ages of 12-17 (mean = 14.9, SD = 2.16), completed self-report questionnaires relating to the study variables. In line with the hypothesis, structural equation modeling (SEM) analysis revealed the following main results: high levels of PIU predicted low levels of well-being among adolescents. In addition, low levels of resilience and high levels of depressive mood (together), low levels of self-control and high levels of depressive mood (together), as well as low levels of resilience and high levels of loneliness, mediated the relationships between PIU and well-being. In general, girls were found to be higher in PIU and in resilience than boys. The study results revealed specific implications for developing intervention programs for adolescents in the context of PIU, aiming at a more balanced and adjusted use of the internet along with preventing the decrease in well-being.
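
The mediation logic reported above can be illustrated, in a much simpler form than the authors' SEM, with a single-mediator regression check. The Python sketch below uses statsmodels on synthetic data (the variable names and effect sizes are invented); it only demonstrates how total, direct, and indirect effects are separated.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Minimal regression-based illustration of a single-mediator path
# (PIU -> depressive mood -> well-being) on synthetic data; this is not
# the authors' SEM, only a sketch of the mediation logic.
rng = np.random.default_rng(1)
n = 433
piu = rng.normal(size=n)
dep = 0.5 * piu + rng.normal(size=n)               # mediator partly driven by PIU
wellbeing = -0.3 * piu - 0.6 * dep + rng.normal(size=n)
df = pd.DataFrame({"piu": piu, "dep": dep, "wb": wellbeing})

total = smf.ols("wb ~ piu", data=df).fit()          # total effect c
path_a = smf.ols("dep ~ piu", data=df).fit()        # path a
direct = smf.ols("wb ~ piu + dep", data=df).fit()   # direct effect c' and path b

indirect = path_a.params["piu"] * direct.params["dep"]
print(f"total effect:   {total.params['piu']:.3f}")
print(f"direct effect:  {direct.params['piu']:.3f}")
print(f"indirect (a*b): {indirect:.3f}")
```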

Keywords: problematic internet use, well-being, adolescents, SEM model

Procedia PDF Downloads 144
1155 Factors Affecting Internet Behavior and Life Satisfaction of Older Adult Learners with Use of Smartphone

Authors: Horng-Ji Lai

Abstract:

The intuitive design features and friendly interface of smartphones attract older adults. In Taiwan, many senior education institutes offer smartphone training courses for older adult learners who are interested in learning this innovative technology. It is expected that the training courses can help them to enjoy the benefits of using smartphones and increase their life satisfaction. Therefore, it is important to investigate the factors that influence older adults’ smartphone use behavior. The purpose of the research was to develop and test a research model that investigates the factors (self-efficacy, social connection, the need to seek health information, and the need to seek financial information) affecting older adult learners’ internet behaviour and their life satisfaction with use of smartphones. Also, this research sought to identify the relationships between the proposed variables. A survey method was used to collect research data. Structural equation modeling was performed using partial least squares (PLS) regression for data exploration and model estimation. The participants were 394 older adult learners from smartphone training courses in active aging learning centers located in central Taiwan. The research results revealed that self-efficacy significantly affected older adult learners’ social connection, the need to seek health information, and the need to seek financial information. The construct of social connection had a positive influence on respondents’ life satisfaction. The implications of these results for practice and future research are also discussed.

Keywords: older adults, smartphone, internet behaviour, life satisfaction

Procedia PDF Downloads 159
1154 Design and Radio Frequency Characterization of Radial Reentrant Narrow Gap Cavity for the Inductive Output Tube

Authors: Meenu Kaushik, Ayon K. Bandhoyadhayay, Lalit M. Joshi

Abstract:

Inductive output tubes (IOTs) are widely used as microwave power amplifiers for broadcast and scientific applications. An IOT is capable of amplifying radio frequency (RF) power with very good efficiency. Its compactness, reliability, high efficiency, high linearity, and low operating cost make this device suitable for various applications. The device consists of an integrated electron gun and RF cavity structure, a collector, and a focusing structure. The working principle of the IOT combines those of the triode and the klystron. The cathode in the electron gun produces a stream of electrons, and a control grid is placed in close proximity to the cathode. Essentially, the input part of the IOT is the integrated gridded electron gun structure, which acts as an input cavity and thereby provides the interaction gap where the input RF signal is applied so that it interacts with the produced electron beam to support the amplification process. The paper presents the design, fabrication, and testing of a radial re-entrant cavity for implementation in the input structure of an IOT at an operating frequency of 350 MHz. The model’s suitability has been discussed, and a generalized mathematical relation has been introduced for obtaining the proper transverse magnetic (TM) resonant mode in radial narrow-gap RF cavities. The structural modeling has been carried out in the CST and SUPERFISH codes. The cavity was fabricated from aluminium, RF characterization was done using a vector network analyzer (VNA), and results are presented for the resonant frequency peaks obtained with the VNA.

Keywords: inductive output tubes, IOT, radial cavity, coaxial cavity, particle accelerators

Procedia PDF Downloads 92
1153 Engineering a Tumor Extracellular Matrix Towards an in vivo Mimicking 3D Tumor Microenvironment

Authors: Anna Cameron, Chunxia Zhao, Haofei Wang, Yun Liu, Guang Ze Yang

Abstract:

Since the first publication in 1775, cancer research has built a comprehensive understanding of how cellular components of the tumor niche promote disease development. However, only within the last decade has research begun to establish the impact of non-cellular components of the niche, particularly the extracellular matrix (ECM). The ECM, a three-dimensional scaffold that sustains the tumor microenvironment, plays a crucial role in disease progression. Cancer cells actively deregulate and remodel the ECM to establish a tumor-promoting environment. Recent work has highlighted the need to further our understanding of the complexity of this cancer-ECM relationship. In vitro models use hydrogels to mimic the ECM, as hydrogel matrices offer the biological compatibility and stability needed for long-term cell culture. However, natural hydrogels are being used in these models verbatim, without tuning their biophysical characteristics to achieve pathophysiological relevance, thus limiting their broad use within cancer research. The biophysical attributes of these gels dictate cancer cell proliferation, invasion, metastasis, and therapeutic response. For the three most widely used natural hydrogels, Matrigel, collagen, and agarose gel, the permeability, stiffness, and pore size of each gel were measured and compared to the in vivo environment. The pore size of all three gels fell between 0.5-6 µm, which coincides with the 0.1-5 µm in vivo pore size found in the literature. However, the stiffness for hydrogels able to support cell culture ranged between 0.05 and 0.3 kPa, which falls outside the range of 0.3-20,000 kPa reported in the literature for an in vivo ECM. Permeability was ~100x greater than in vivo measurements, due in large part to the lack of cellular components, which impede permeation. These measurements are nonetheless important when assessing therapeutic particle delivery, as ECM permeability decreased with increasing particle size, with 100 nm particles exhibiting a fifth of the permeability of 10 nm particles. This work explores ways of adjusting the biophysical characteristics of hydrogels by changing protein concentration, and the trade-off which occurs due to the interdependence of these factors. The global aim of this work is to produce a more pathophysiologically relevant model for each tumor type.

Keywords: cancer, extracellular matrix, hydrogel, microfluidic

Procedia PDF Downloads 68
1152 Engineering the Topological Insulator Structures for Terahertz Detectors

Authors: M. Marchewka

Abstract:

The article is devoted to the possible optical transitions in a double quantum well system based on HgTe/HgCd(Mn)Te heterostructures. Such structures can find applications as detectors and sources of radiation in the terahertz range. A double quantum well (DQW) system consists of two QWs separated by a barrier that is transparent for electrons. Such systems look promising from the point of view of the additional degrees of freedom. In the case of the topological insulator in an approximately 6.4 nm wide HgTe QW, or in strained 3D HgTe films, topologically protected surface states appear at the interfaces/surfaces. Electrons in those edge states move along the interfaces/surfaces without backscattering due to time-reversal symmetry. The combination of the topological properties, which have already been verified experimentally, together with the very well known properties of DQWs, can be very interesting from the applications point of view, especially in the THz area. It is important that, at the present stage, the technology makes it possible to create high-quality structures of this type, and intensive experimental and theoretical studies of their properties are already underway. The idea presented in this paper is based on the eight-band KP model, including additional terms related to structural inversion asymmetry, interface inversion asymmetry, the influence of the magnetic content, and uniaxial strain, which together describe the full picture of a possible real structure. All of these terms, together with an external electric field, can be sources of symmetry breaking in the investigated materials. Using the eight-band KP model, we investigated the electronic band structure with and without a magnetic field from the point of view of application as a THz detector in a small magnetic field (below 2 T). We believe that such structures are a way to obtain tunable topological insulators and multilayer topological insulators. Using the one-dimensional electrons in the topologically protected interface states as fast, collision-free charge and signal carriers, the detection of the optical signal should be fast, which is very important in the high-resolution detection of signals in the THz range. The proposed engineering of the investigated structures is now one of the important steps on the way to obtaining structures with the predicted properties.

Keywords: topological insulator, THz spectroscopy, KP model, II-VI compounds

Procedia PDF Downloads 101
1151 Development of Electrochemical Biosensor Based on Dendrimer-Magnetic Nanoparticles for Detection of Alpha-Fetoprotein

Authors: Priyal Chikhaliwala, Sudeshna Chandra

Abstract:

Liver cancer is one of the most common malignant tumors, with poor prognosis, because liver cancer does not exhibit any symptoms in the early stage of the disease. An increased serum level of AFP is clinically considered a diagnostic marker for liver malignancy. The present diagnostic modalities include various types of immunoassays, radiological studies, and biopsy. However, these tests suffer from slow response times, require significant sample volumes, achieve limited sensitivity, and ultimately become expensive and burdensome to patients. Considering all these aspects, an electrochemical biosensor based on dendrimer-magnetic nanoparticles (MNPs) was designed. Dendrimers are novel nano-sized, three-dimensional molecules with monodispersed structures. Poly-amidoamine (PAMAM) dendrimers with eight -NH₂ groups, using ethylenediamine as a core molecule, were synthesized by the Michael addition reaction. Dendrimers provide the added advantage of not only stabilizing Fe₃O₄ NPs but also the capability of performing multiple electron redox events and binding multiple biological ligands to their dendritic end-surface. Fe₃O₄ NPs, due to their superparamagnetic behavior, can be exploited for magneto-separation processes. Fe₃O₄ NPs were stabilized with the PAMAM dendrimer by an in situ co-precipitation method. The surface coating was examined by FT-IR, XRD, VSM, and TGA analysis. Electrochemical behavior and kinetic studies were evaluated using CV, which revealed that the dendrimer-Fe₃O₄ NPs can be regarded as electrochemically active materials. The electrochemical immunosensor was designed by immobilizing anti-AFP onto the dendrimer-MNPs by a glutaraldehyde conjugation reaction. The bioconjugates were then incubated with AFP antigen. The immunosensor was characterized electrochemically, indicating successful immuno-binding events. The binding events were also further studied using magnetic particle imaging (MPI), a novel imaging modality in which Fe₃O₄ NPs are used as tracer molecules with positive contrast. Multicolor MPI was able to clearly localize the AFP antigen and antibody and their successful binding. The results demonstrate immense potential in terms of biosensing and enabling MPI of AFP in clinical diagnosis.

Keywords: alpha-fetoprotein, dendrimers, electrochemical biosensors, magnetic nanoparticles

Procedia PDF Downloads 119
1150 Dispersion Effects in Waves Reflected by Lossy Conductors: The Optics vs. Electromagnetics Approach

Authors: Oibar Martinez, Clara Oliver, Jose Miguel Miranda

Abstract:

The study of dispersion phenomena in electromagnetic waves reflected by conductors at infrared and lower frequencies is a topic which finds a number of applications. In this work, we aim to explain which are the most relevant ones and how this phenomenon is modeled from both the optics and the electromagnetics points of view. We also explain how the amplitude of an electromagnetic wave reflected by a lossy conductor can depend on both the frequency of the incident wave and the electrical properties of the conductor, and we illustrate this phenomenon with a practical example. The mathematical analysis made by a specialist in electromagnetics or a microwave engineer is apparently very different from the one made by a specialist in optics. We show here how both approaches lead to the same physical result and what the key concepts are that enable one to understand that, despite the differences in the equations, the solution to the problem happens to be the same. Our study starts with an analysis made by using the complex refractive index and the reflectance parameter. We show how this reflectance depends on the square root of the frequency when the reflecting material is a good conductor and the frequency of the wave is low enough. Then we analyze the same problem with a less known approach, which is based on the reflection coefficient of the electric field, a parameter that is most commonly used in electromagnetics and microwave engineering. In summary, this paper presents a mathematical study, illustrated with a worked example, which unifies the modeling of dispersion effects made by specialists in optics and that made by specialists in electromagnetics. The main finding of this work is that it is possible to reproduce the dependence of the Fresnel reflectance on frequency from the intrinsic impedance of the reflecting media.
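
The equivalence claimed above can be checked numerically in a few lines. The Python sketch below computes the normal-incidence reflectance of a good conductor both from the complex refractive index (optics route) and from the intrinsic impedance and reflection coefficient (electromagnetics route); the copper-like conductivity is an assumed example value, and both routes reproduce the square-root-of-frequency behaviour of 1 - R.

```python
import numpy as np

# Reflectance of a good conductor computed two ways, as discussed above:
# (1) optics route via the complex refractive index, (2) electromagnetics
# route via the intrinsic impedance and the reflection coefficient.
eps0 = 8.854e-12            # F/m
mu0 = 4e-7 * np.pi          # H/m
eta0 = np.sqrt(mu0 / eps0)  # impedance of free space, ~377 ohm
sigma = 5.8e7               # S/m (assumed, copper-like)

f = np.logspace(9, 13, 5)   # 1 GHz to 10 THz
w = 2 * np.pi * f

# Optics: good-conductor approximation n = k = sqrt(sigma / (2 eps0 w))
n = np.sqrt(sigma / (2 * eps0 * w))
N = n - 1j * n
R_optics = np.abs((N - 1) / (N + 1)) ** 2

# Electromagnetics: intrinsic impedance of a good conductor
eta = (1 + 1j) * np.sqrt(w * mu0 / (2 * sigma))
R_em = np.abs((eta - eta0) / (eta + eta0)) ** 2

for fi, ro, re in zip(f, R_optics, R_em):
    # 1 - R grows as sqrt(f), the square-root dependence noted above
    print(f"f = {fi:9.2e} Hz  1-R(optics) = {1-ro:.2e}  1-R(EM) = {1-re:.2e}")
```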

Keywords: dispersion, electromagnetic waves, microwaves, optics

Procedia PDF Downloads 101
1149 Technical and Economic Analysis of Smart Micro-Grid Renewable Energy Systems: An Applicable Case Study

Authors: M. A. Fouad, M. A. Badr, Z. S. Abd El-Rehim, Taher Halawa, Mahmoud Bayoumi, M. M. Ibrahim

Abstract:

Renewable energy-based micro-grids are presently attracting significant consideration. The smart grid system is presently considered a reliable solution for the expected deficiency in the power required from future power systems. The purpose of this study is to determine the optimal component sizes of a micro-grid, investigating technical and economic performance along with the environmental impacts. The micro-grid supplies electricity to two small factories, and both on-grid and off-grid modes are considered. The micro-grid includes photovoltaic cells, a back-up diesel generator, wind turbines, and a battery bank. The estimated load pattern is 76 kW peak. The system is modeled and simulated with the MATLAB/Simulink tool to identify the technical issues based on renewable power generation units. To evaluate the system economy, two criteria are used: the net present cost and the cost of generated electricity. The most feasible system components for the selected application are obtained, based on the required parameters, using the HOMER simulation package. The results showed that a Wind/Photovoltaic (W/PV) on-grid system is more economical than a Wind/Photovoltaic/Diesel/Battery (W/PV/D/B) off-grid system, as the cost of generated electricity (COE) is 0.266 $/kWh and 0.316 $/kWh, respectively. Considering the cost of carbon dioxide emissions, the off-grid system becomes competitive with the on-grid system, as the COE is found to be 0.256 $/kWh and 0.266 $/kWh for the on-grid and off-grid systems, respectively.
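
The two economic criteria named above (net present cost and cost of generated electricity) are simple to compute once an annualized cost is available. The Python sketch below shows the generic calculation with invented round-number inputs; it is not the study's HOMER data, only the form of the COE and NPC relations.

```python
# Illustrative cost-of-energy (COE) and net-present-cost (NPC) calculation,
# following the two criteria named above. All inputs are invented round
# numbers, not the study's actual HOMER inputs.
def crf(rate, years):
    """Capital recovery factor: converts a present cost to an annual cost."""
    return rate * (1 + rate) ** years / ((1 + rate) ** years - 1)

capital_cost = 250_000.0       # $ (assumed)
annual_om = 8_000.0            # $/yr operation and maintenance (assumed)
annual_fuel = 5_000.0          # $/yr diesel fuel (assumed)
annual_energy = 76 * 0.35 * 8760   # kWh/yr from a 76 kW peak at an assumed 35% capacity factor

annualized = capital_cost * crf(0.06, 20) + annual_om + annual_fuel
coe = annualized / annual_energy            # $/kWh
npc = annualized / crf(0.06, 20)            # total cost brought back to present value

print(f"COE = {coe:.3f} $/kWh,  NPC = {npc:,.0f} $")
```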

Keywords: renewable energy sources, micro-grid system, modeling and simulation, on/off grid system, environmental impacts

Procedia PDF Downloads 242
1148 Portable and Parallel Accelerated Development Method for Field-Programmable Gate Array (FPGA)-Central Processing Unit (CPU)- Graphics Processing Unit (GPU) Heterogeneous Computing

Authors: Nan Hu, Chao Wang, Xi Li, Xuehai Zhou

Abstract:

The field-programmable gate array (FPGA) has been widely adopted in the high-performance computing domain. In recent years, the embedded system-on-a-chip (SoC) has contained a coarse-granularity multi-core CPU (central processing unit) and a mobile GPU (graphics processing unit) that can be used as general-purpose accelerators. The motivation is that algorithms of various parallel characteristics can be efficiently mapped to the heterogeneous architecture coupling these three processors. The CPU and GPU offload partial computationally intensive tasks from the FPGA to reduce resource consumption and lower the overall cost of the system. However, in common present scenarios, applications utilize only one type of accelerator because the development approaches supporting the collaboration of heterogeneous processors face challenges. Therefore, a systematic approach is needed that takes advantage of write-once-run-anywhere portability and the high execution performance of modules mapped to various architectures, and that facilitates the exploration of the design space. In this paper, a servant-execution-flow model is proposed for abstracting the cooperation of the heterogeneous processors, supporting task partitioning, communication, and synchronization. At its first run, the intermediate language, represented by a data flow diagram, can generate the executable code of the target processor or can be converted into high-level programming languages. Instantiation parameters efficiently control the relationship between modules and computational units, including two-level hierarchical mapping of processing units and adjustment of data-level parallelism. An embedded system for a three-dimensional waveform oscilloscope is selected as a case study. The performance of algorithms such as contrast stretching is analyzed with implementations on various combinations of these processors. The experimental results show that the heterogeneous computing system achieves, with less than 35% of the resources, performance similar to the pure FPGA implementation and comparable energy efficiency.

Keywords: FPGA-CPU-GPU collaboration, design space exploration, heterogeneous computing, intermediate language, parameterized instantiation

Procedia PDF Downloads 83
1147 A Fourier Method for Risk Quantification and Allocation of Credit Portfolios

Authors: Xiaoyu Shen, Fang Fang, Chujun Qiu

Abstract:

Herewith we present a Fourier method for credit risk quantification and allocation in the factor-copula model framework. The key insight is that, compared to directly computing the cumulative distribution function of the portfolio loss via Monte Carlo simulation, it is in fact more efficient to calculate the transform of the distribution function in the Fourier domain; inverting back to the real domain can then be done in a single, semi-analytic step, thanks to the popular COS method (with some adjustments). We also show that the Euler risk allocation problem can be solved in the same way, since it can be transformed into the problem of evaluating a conditional cumulative distribution function. Once the conditional or unconditional cumulative distribution function is known, one can easily calculate various risk metrics. The proposed method not only fills a niche in the literature, to the best of our knowledge, of accurate numerical methods for risk allocation, but may also serve as a much faster alternative to Monte Carlo simulation for risk quantification in general. It can cope with various factor-copula model choices, which we demonstrate via examples of a two-factor Gaussian copula and a two-factor Gaussian-t hybrid copula. The fast error convergence is proved mathematically and then verified by numerical experiments, in which Value-at-Risk, Expected Shortfall, and conditional Expected Shortfall are taken as examples of commonly used risk metrics. The calculation speed and accuracy are shown to be significantly superior to Monte Carlo simulation for real-sized portfolios. The computational complexity is, by design, driven primarily by the number of factors instead of the number of obligors, as is the case in Monte Carlo simulation. The limitation of this method lies in the "curse of dimensionality" that is intrinsic to multi-dimensional numerical integration, which, however, can be relaxed with the help of dimension reduction techniques and/or parallel computing, as we will demonstrate in a separate paper. The potential application of this method has a wide range: from credit derivatives pricing to economic capital calculation of the banking book, default risk charge and incremental risk charge computation of the trading book, and even other risk types than credit risk.
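
The core of the COS step is recovering a distribution function from a characteristic function by a Fourier-cosine expansion. The Python sketch below illustrates that step alone, using a standard normal as a stand-in; in an actual application the characteristic function of the factor-copula portfolio loss would replace `cf_normal`, and the truncation range and number of terms here are assumptions.

```python
import numpy as np

# Minimal COS-method sketch: recover a CDF from a characteristic function
# by a Fourier-cosine expansion on a truncated interval [a, b].
def cos_cdf(cf, x, a=-10.0, b=10.0, n_terms=128):
    k = np.arange(n_terms)
    u = k * np.pi / (b - a)
    # Cosine-series coefficients of the density on [a, b]
    A = 2.0 / (b - a) * np.real(cf(u) * np.exp(-1j * u * a))
    A[0] *= 0.5                      # first term carries a weight of one half
    x = np.atleast_1d(x)
    # Integrate each cosine term analytically from a to x
    terms = np.empty((n_terms, x.size))
    terms[0] = A[0] * (x - a)
    kk = k[1:, None]
    terms[1:] = A[1:, None] * (b - a) / (kk * np.pi) * np.sin(kk * np.pi * (x - a) / (b - a))
    return terms.sum(axis=0)

cf_normal = lambda u: np.exp(-0.5 * u**2)   # characteristic function of N(0, 1)
for q in (-1.0, 0.0, 1.645):
    print(f"F({q:+.3f}) ~ {cos_cdf(cf_normal, q)[0]:.4f}")
```

Once the (conditional or unconditional) CDF is available this way, quantile-based metrics such as Value-at-Risk and Expected Shortfall follow by simple post-processing.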

Keywords: credit portfolio, risk allocation, factor copula model, the COS method, Fourier method

Procedia PDF Downloads 124
1146 Development of a Real-Time Simulink Based Robotic System to Study Force Feedback Mechanism during Instrument-Object Interaction

Authors: Jaydip M. Desai, Antonio Valdevit, Arthur Ritter

Abstract:

Robotic surgery is used to enhance minimally invasive surgical procedures. It provides a greater degree of freedom for surgical tools but lacks a haptic feedback system to provide a sense of touch to the surgeon. Surgical robots work on master-slave operation, where the user is the master and the robotic arms are the slaves. Currently, surgical robots provide precise control of the surgical tools but rely heavily on visual feedback, which sometimes causes damage to internal organs. The goal of this research was to design and develop a real-time Simulink-based robotic system to study the force feedback mechanism during instrument-object interaction. The setup includes three Velmex XSlide assemblies (XYZ stage) for three-dimensional movement, an end-effector assembly for forceps, an electronic circuit for four strain gages, two Novint Falcon 3D gaming controllers, a microcontroller board with linear actuators, and MATLAB and Simulink toolboxes. The strain gages were calibrated using an Imada digital force gauge and tested with a hard-core wire to measure instrument-object interaction in the range of 0-35 N. The designed Simulink model successfully acquires 3D coordinates from the two Novint Falcon controllers and transfers the coordinates to the XYZ stage and forceps. The Simulink model also reads the strain gage signals in real time through the 10-bit analog-to-digital converter of the microcontroller assembly, converts voltage into force, and feeds the output signals back to the Novint Falcon controllers for the force feedback mechanism. The experimental setup allows the user to change the forward kinematics algorithms to achieve the desired movement of the XYZ stage and forceps. This project combines haptic technology with a surgical robot to provide a sense of touch to the user controlling the forceps through a machine-computer interface.
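
The voltage-to-force conversion in that loop is a simple linear mapping. The Python sketch below shows the general idea with invented calibration constants (the actual slope and offset would come from the Imada force gauge calibration mentioned above), plus a clamp to the 0-35 N range used in the study.

```python
# Sketch of the signal path described above: read a 10-bit ADC count from
# the strain-gage amplifier, convert it to voltage, then to force using a
# linear calibration. The calibration constants here are invented.
ADC_BITS = 10
V_REF = 5.0                    # ADC reference voltage (assumed)
CAL_SLOPE = 14.0               # N per volt (assumed from calibration)
CAL_OFFSET = -0.5              # N at zero strain (assumed)

def counts_to_force(counts: int) -> float:
    """Convert a raw 10-bit ADC reading into an interaction force in newtons."""
    voltage = counts / (2 ** ADC_BITS - 1) * V_REF
    return CAL_SLOPE * voltage + CAL_OFFSET

def clamp_for_feedback(force_n: float, max_n: float = 35.0) -> float:
    """Limit the value sent back to the haptic controller to the 0-35 N range."""
    return max(0.0, min(force_n, max_n))

for raw in (0, 512, 1023):
    f = counts_to_force(raw)
    print(f"ADC {raw:4d} -> {f:5.2f} N -> feedback {clamp_for_feedback(f):5.2f} N")
```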

Keywords: surgical robot, haptic feedback, MATLAB, strain gage, simulink

Procedia PDF Downloads 510
1145 The Improvement of Turbulent Heat Flux Parameterizations in Tropical GCMs Simulations Using Low Wind Speed Excess Resistance Parameter

Authors: M. O. Adeniyi, R. T. Akinnubi

Abstract:

The parameterization of turbulent heat fluxes is needed for modeling land-atmosphere interactions in Global Climate Models (GCMs). However, current GCMs still have difficulties producing reliable turbulent heat fluxes for humid tropical regions, which may be due to inadequate parameterization of the roughness lengths for momentum (z0m) and heat (z0h) transfer. These roughness lengths are usually expressed in terms of an excess resistance factor (κB⁻¹), which is used to account for the different resistances to momentum and heat transfer. In this paper, a more appropriate excess resistance factor suitable for low wind speed conditions was developed and incorporated into the aerodynamic resistance approach (ARA) in GCMs. The performance of various standard GCM κB⁻¹ schemes developed for high wind speed conditions was also assessed. Based on the in-situ surface heat fluxes and profile measurements of wind speed and temperature from the Nigeria Micrometeorological Experimental site (NIMEX), a new κB⁻¹ was derived through application of Monin-Obukhov similarity theory and the Brutsaert theoretical model for heat transfer. Turbulent flux parameterization with this new formula provides better estimates of heat fluxes than those estimated using the existing GCM κB⁻¹ schemes. With the derived κB⁻¹, the MBE and RMSE of the parameterized QH ranged from -1.15 to -5.10 W m⁻² and 10.01 to 23.47 W m⁻², while those of QE ranged from -8.02 to 6.11 W m⁻² and 14.01 to 18.11 W m⁻², respectively. The derived κB⁻¹ gave better estimates of QH than of QE during daytime. The derived relation is κB⁻¹ = 6.66 Re*^0.02 - 5.47, where Re* is the roughness Reynolds number. The derived κB⁻¹ scheme, which corrects a well-documented large overestimation of turbulent heat fluxes, is therefore recommended for most regional models within the tropics, where low wind speed is prevalent.
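
Because the abstract gives the derived relation explicitly, the parameterization is easy to sketch. The Python example below applies κB⁻¹ = 6.66 Re*^0.02 - 5.47 inside a neutral aerodynamic resistance calculation of the sensible heat flux QH; the wind, temperature, and roughness inputs are invented example values (not NIMEX data) and stability corrections are omitted for brevity.

```python
import numpy as np

# Sketch of the aerodynamic resistance approach (ARA) for sensible heat flux
# using the excess resistance factor derived above. Neutral conditions assumed.
k = 0.4                     # von Karman constant
rho, cp = 1.15, 1005.0      # air density (kg/m3), specific heat (J/kg/K)
nu = 1.5e-5                 # kinematic viscosity of air (m2/s)

z, z0m = 2.0, 0.01          # measurement height and momentum roughness length (m)
u = 1.2                     # low wind speed (m/s), the regime targeted above
Ts, Ta = 305.0, 300.0       # surface and air temperature (K), assumed

ustar = k * u / np.log(z / z0m)          # neutral friction velocity
re_star = ustar * z0m / nu               # roughness Reynolds number Re*
kB_inv = 6.66 * re_star**0.02 - 5.47     # derived excess resistance factor
z0h = z0m / np.exp(kB_inv)               # since kB^-1 = ln(z0m / z0h)

r_ah = np.log(z / z0m) * np.log(z / z0h) / (k**2 * u)   # aerodynamic resistance (s/m)
QH = rho * cp * (Ts - Ta) / r_ah                        # sensible heat flux (W/m2)
print(f"Re* = {re_star:.2f}, kB^-1 = {kB_inv:.2f}, QH = {QH:.1f} W/m2")
```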

Keywords: humid, tropic, excess resistance factor, overestimation, turbulent heat fluxes

Procedia PDF Downloads 171
1144 On the Added Value of Probabilistic Forecasts Applied to the Optimal Scheduling of a PV Power Plant with Batteries in French Guiana

Authors: Rafael Alvarenga, Hubert Herbaux, Laurent Linguet

Abstract:

The uncertainty concerning the power production of intermittent renewable energy is one of the main barriers to the integration of such assets into the power grid. Efforts have thus been made to develop methods to quantify this uncertainty, allowing producers to ensure more reliable and profitable engagements related to their future power delivery. Even though a diversity of probabilistic approaches has been proposed in the literature with promising results, the added value of adopting such methods for scheduling intermittent power plants is still unclear. In this study, the profits obtained by a decision-making model used to optimally schedule an existing PV power plant connected to batteries are compared when the model is fed with deterministic and probabilistic forecasts generated with two of the most recent methods proposed in the literature. Moreover, deterministic forecasts with different accuracy levels were used in the experiments, testing the utility and the capability of probabilistic methods of modeling the progressively increasing uncertainty. Even though probabilistic approaches have unquestionably been developed in the recent literature, the results obtained through a case study show that deterministic forecasts still provide the best performance if accurate, ensuring a gain of 14% on final profits compared to the average performance of probabilistic models conditioned on the same forecasts. When the accuracy of deterministic forecasts progressively decreases, probabilistic approaches start to become competitive options until they completely outperform deterministic forecasts when these are very inaccurate, generating 73% more profits in the case considered compared to the deterministic approach.

Keywords: PV power forecasting, uncertainty quantification, optimal scheduling, power systems

Procedia PDF Downloads 48
1143 A Modular and Reusable Bond Graph Model of Epithelial Transport in the Proximal Convoluted Tubule

Authors: Leyla Noroozbabaee, David Nickerson

Abstract:

We introduce a modular, consistent, reusable bond graph model of the renal nephron's proximal convoluted tubule (PCT), which can reproduce biological behaviour. In this work, we focus on ion and volume transport in the proximal convoluted tubule of the renal nephron. Modelling complex systems requires complex modelling problems to be broken down into manageable pieces. This can be enabled by developing models of subsystems that are subsequently coupled hierarchically, which bond graphs support because they are based on a graph structure. In the current work, we define two modular subsystems: the resistive module representing the membrane and the capacitive module representing solution compartments. Each module is analyzed based on thermodynamic processes, and all the subsystems are reintegrated into circuit theory in network thermodynamics. The epithelial transport system we introduce in the current study consists of five transport membranes and four solution compartments. Coupled dissipations occur in the membrane subsystems, and coupled free-energy increasing or decreasing processes appear in the solution compartment subsystems. These structural subsystems also consist of elementary thermodynamic processes: dissipations, free-energy changes, and power conversions. We provide free and open access to the Python implementation to ensure our model is accessible, enabling readers to explore the model by setting up their own simulations and reproducibility tests.

Keywords: bond graph, epithelial transport, water transport, mathematical modeling

Procedia PDF Downloads 59
1142 Numerical Modelling of Immiscible Fluids Flow in Oil Reservoir Rocks during Enhanced Oil Recovery Processes

Authors: Zahreddine Hafsi, Manoranjan Mishra , Sami Elaoud

Abstract:

Ensuring the maximum recovery rate of oil from reservoir rocks is a challenging task that requires preliminary numerical analysis of the different techniques used to enhance the recovery process. After conventional oil recovery processes, and in order to retrieve oil left behind after the primary recovery phase, water flooding is one of several techniques used for enhanced oil recovery (EOR). In this research work, EOR via water flooding is numerically modeled, and the hydrodynamic instabilities resulting from immiscible oil-water flow in reservoir rocks are investigated. An oil reservoir is a porous medium consisting of many fractures of tiny dimensions. For modelling purposes, the oil reservoir is considered as a collection of capillary tubes, which provides useful insights into how fluids behave in the reservoir pore spaces. Equations governing oil-water flow in oil reservoir rocks are developed and numerically solved following a finite element scheme. Numerical results are obtained using COMSOL Multiphysics software. The two-phase Darcy module of COMSOL Multiphysics allows modelling of the imbibition process by the injection of water (as the wetting phase) into an oil reservoir. The van Genuchten, Brooks-Corey, and Leverett models were considered as retention models; the obtained flow configurations are compared, and the governing parameters are discussed. For the considered retention models, it was found that the onset of instabilities, viz. the fingering phenomenon, is highly dependent on the capillary pressure as well as on the boundary conditions, i.e., the inlet pressure and the injection velocity.
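
The retention models named above relate capillary pressure to saturation in closed form, which is what distinguishes the imbibition behaviour between them. The Python sketch below evaluates the van Genuchten and Brooks-Corey curves with illustrative placeholder parameters (not the values used in the COMSOL model) so the difference in shape can be seen directly.

```python
import numpy as np

# Capillary pressure vs. wetting-phase saturation for two of the retention
# models named above. Parameter values are illustrative placeholders.
def van_genuchten_pc(Se, alpha=2.0, n=2.5):
    """Capillary pressure (in units of 1/alpha) from effective saturation Se."""
    m = 1.0 - 1.0 / n
    return (1.0 / alpha) * (Se ** (-1.0 / m) - 1.0) ** (1.0 / n)

def brooks_corey_pc(Se, pe=0.3, lam=2.0):
    """Brooks-Corey capillary pressure with entry pressure pe and index lambda."""
    return pe * Se ** (-1.0 / lam)

Se = np.linspace(0.05, 0.99, 5)   # effective saturation of the wetting phase
print(" Se    pc(van Genuchten)   pc(Brooks-Corey)")
for s, p_vg, p_bc in zip(Se, van_genuchten_pc(Se), brooks_corey_pc(Se)):
    print(f"{s:4.2f}      {p_vg:8.3f}          {p_bc:8.3f}")
```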

Keywords: capillary pressure, EOR process, immiscible flow, numerical modelling

Procedia PDF Downloads 107
1141 Aeroacoustics Investigations of Unsteady 3D Airfoil for Different Angle Using Computational Fluid Dynamics Software

Authors: Haydar Kepekçi, Baha Zafer, Hasan Rıza Güven

Abstract:

Noise disturbance is one of the major factors considered in the rapid development of aircraft technology. This paper examines the flow field over 2D NACA0015 and 3D NACA0012 blade profiles, using the SST k-ω turbulence model to compute the unsteady flow field. The time-dependent flow field variables are inserted into the Ffowcs Williams and Hawkings (FW-H) equations as input, and sound pressure level (SPL) values are computed for different angles of attack (AoA) at a microphone positioned in the computational domain, in order to investigate the effect of the unsteady 2D and 3D airfoil region on the noise level. The computed results are compared with experimental data available in the open literature. As a result, one of the calculated Cp values is slightly lower than the experimental value; this difference could be due to the higher Reynolds number of the experimental data. The ANSYS Fluent software was used in this study. Fluent includes well-validated physical modeling capabilities that deliver fast, accurate results across a wide range of CFD and multiphysics applications. This paper presents a study of external flow over an airfoil. The 2D NACA0015 case has approximately 7 million elements and solves compressible fluid flow with heat transfer using the SST turbulence model, while the 3D NACA0012 case has approximately 3 million elements.
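
The final post-processing step described above, turning the pressure history at the virtual microphone into a sound pressure level, is shown in the Python sketch below. The pressure signal is synthetic (a single assumed tone); in the study this signal would come from the FW-H acoustic solver.

```python
import numpy as np

# Convert a pressure time history sampled at a virtual microphone into an
# overall sound pressure level. The signal below is synthetic.
P_REF = 2e-5                     # reference pressure, 20 micropascal

def spl_db(p: np.ndarray) -> float:
    """Overall sound pressure level (dB) of a fluctuating pressure signal."""
    p_fluct = p - p.mean()       # remove the mean (static) pressure
    p_rms = np.sqrt(np.mean(p_fluct ** 2))
    return 20.0 * np.log10(p_rms / P_REF)

t = np.linspace(0.0, 0.1, 10_000)
p_mic = 101_325 + 2.0 * np.sin(2 * np.pi * 500 * t)   # 2 Pa tone at 500 Hz (synthetic)
print(f"SPL at microphone: {spl_db(p_mic):.1f} dB")
```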

Keywords: 3D blade profile, noise disturbance, aeroacoustics, Ffowcs-Williams and Hawkings (FW-H) equations, k-ω-SST turbulence model

Procedia PDF Downloads 184
1140 Avoiding Gas Hydrate Problems in Qatar Oil and Gas Industry: Environmentally Friendly Solvents for Gas Hydrate Inhibition

Authors: Nabila Mohamed, Santiago Aparicio, Bahman Tohidi, Mert Atilhan

Abstract:

One of the biggest problems Qatar faces in processing its main natural resource, natural gas, is the frequently occurring blockage in pipelines caused by uncontrolled gas hydrate formation. Several million dollars are spent at the process site to safely remove these hydrate blockages by using chemical inhibitors. We aim to establish a national database which addresses the physical conditions that promote Qatari natural gas to form gas hydrates in the pipelines. Moreover, we aim to design and test novel hydrate inhibitors that are suitable for Qatari natural gas and its processing facilities. From these perspectives, we aim to provide more effective and sustainable reservoir utilization and processing of Qatari natural gas. In this work, we present the initial findings of a QNRF-funded project, which deals with the natural gas hydrate formation characteristics of Qatari-type gas using both experimental (PVTx) and computational (molecular simulation) methods. We present data from two fully automated apparatuses: a gas hydrate autoclave and a rocking cell. Hydrate equilibrium curves, including growth/dissociation conditions for multi-component systems, are reported for several gas mixtures that represent Qatari-type natural gas, with and without the presence of well-known kinetic and thermodynamic hydrate inhibitors. Ionic liquids were designed and used to test their inhibition performance, and their DFT and molecular modeling simulation results were also obtained and compared with the experimental results. Results showed significant performance of ionic liquids at concentrations up to 0.5% by volume, with 2 to 4 °C of inhibition at high pressures.

Keywords: gas hydrates, natural gas, ionic liquids, inhibition, thermodynamic inhibitors, kinetic inhibitors

Procedia PDF Downloads 1285
1139 Third Party Logistics (3PL) Selection Criteria for an Indian Heavy Industry Using SEM

Authors: Nadama Kumar, P. Parthiban, T. Niranjan

Abstract:

In the present paper, we propose an integrated approach for 3PL supplier selection that suits the distinctive strategic needs of an outsourcing organization in the southern part of India. Four fundamental criteria have been used, namely performance, IT, service, and intangibles. These are additionally subdivided into fifteen sub-criteria. The proposed strategy combines structural equation modeling (SEM) and non-additive fuzzy integral methods. The introduction of fuzziness handles the vagueness of human judgments. The SEM approach has been used to validate the selection criteria for the proposed model, while the non-additive fuzzy integral approach uses the SEM model output to assess a supplier selection score. The case organization has an exclusive vertically integrated structure comprising several companies, each focusing on a narrow segment of the value chain. To ensure manufacturing and logistics proficiency, it relies significantly on 3PL suppliers to attain supply chain excellence. However, 3PL supplier selection is an intricate decision-making procedure involving multiple selection criteria. The goal of this work is to identify the crucial 3PL selection criteria by using the non-additive fuzzy integral approach. Unlike conventional multi-criteria decision-making (MCDM) methods, which frequently assume independence among criteria and additive importance weights, the non-additive fuzzy integral is an effective method for handling dependency among criteria, vague information, and the inherent fuzziness of human judgment. In this work, we present an empirical case that uses the non-additive fuzzy integral to assess the importance weights of the selection criteria and identify the most suitable 3PL supplier.

Keywords: 3PL, non-additive fuzzy integral approach, SEM, fuzzy

Procedia PDF Downloads 255
1138 Adding a Few Language-Level Constructs to Improve OOP Verifiability of Semantic Correctness

Authors: Lian Yang

Abstract:

Object-oriented programming (OOP) is the dominant programming paradigm in today’s software industry, and it has literally enabled average software developers to develop millions of commercial-strength software applications in the era of the INTERNET revolution over the past three decades. On the other hand, the lack of a strict mathematical model and of domain constraint features at the language level has long perplexed the computer science academia and the OOP engineering community. This situation has resulted in inconsistent system qualities and hard-to-understand designs in some OOP projects. The difficulties of fixing the current situation are also well known. Although the power of OOP lies in its unbridled flexibility and enormously rich data modeling capability, we argue that the ambiguity and the implicit facade surrounding the conceptual model of a class and an object should be eliminated as much as possible. We list the five major usages of a class and propose to separate them by introducing new language constructs. Using the well-established theories of sets and finite state machines (FSMs), we propose to apply certain simple, generic, and yet effective constraints at the OOP language level in an attempt to find a possible solution to the above-mentioned issues regarding OOP. The goal is to make OOP more theoretically sound as well as to help programmers uncover warning signs of irregularities and domain-specific issues in applications early in the development stage and catch semantic mistakes at runtime, improving the correctness verifiability of software programs. On the other hand, the aim of this paper is more practical than theoretical.

Keywords: new language constructs, set theory, FSM theory, user defined value type, function groups, membership qualification attribute (MQA), check-constraint (CC)

Procedia PDF Downloads 220
1137 Beyond the “Breakdown” of Karman Vortex Street

Authors: Ajith Kumar S., Sankaran Namboothiri, Sankrish J., SarathKumar S., S. Anil Lal

Abstract:

A numerical analysis of flow over a heated circular cylinder is carried out in this paper. The governing equations, namely the Navier-Stokes and energy equations within the Boussinesq approximation along with the continuity equation, are solved using a hybrid FEM-FVM technique. The density gradient created by heating the cylinder induces a buoyancy force opposite to the direction of the acceleration due to gravity, g. In the present work, the flow direction and the direction of the buoyancy force are taken to be the same (vertical flow configuration), so that the buoyancy force accelerates the mean flow past the cylinder. The relative dominance of the buoyancy force over the inertia force is characterized by the Richardson number (Ri), which is one of the parameters that govern the flow dynamics and heat transfer in this analysis. It is well known that above a certain value of the Reynolds number, Re (the ratio of inertia forces to viscous forces), unsteady von Karman vortices can be seen shedding behind the cylinder. The shedding wake patterns can be seriously altered by heating or cooling the cylinder. The non-dimensional shedding frequency, called the Strouhal number, is found to increase as Ri increases. The aerodynamic force coefficients CL and CD are observed to change their values. In the present vertical configuration of flow over the cylinder, as Ri increases, the shedding frequency increases and then suddenly drops to zero at a critical value of the Richardson number. The unsteady vortices turn into steady standing recirculation bubbles behind the cylinder beyond this critical Richardson number. This phenomenon is well known in the literature as the "breakdown of the Karman vortex street". It is interesting to see the flow structures on further increase in the Richardson number. On further heating of the cylinder surface, the size of the recirculation bubble decreases without losing its symmetry about the horizontal axis passing through the center of the cylinder. The separation angle is found to decrease with Ri. Finally, we observed a second critical Richardson number, after which the flow is attached to the cylinder surface without any wake behind it. The flow structures are then symmetrical not only about the horizontal axis but also about the vertical axis passing through the center of the cylinder. At this stage, there is a "single plume" emanating from the rear stagnation point of the cylinder. We also observed that the transition of the plume is a strong function of the Richardson number.
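
The non-dimensional groups that organize these regimes (Re, Ri, and the Strouhal number) are quick to evaluate. The Python sketch below computes them for an assumed air flow past a heated cylinder; all numbers are invented examples, not the paper's simulation parameters.

```python
import math

# Quick evaluation of the non-dimensional groups discussed above for an
# assumed air flow past a heated cylinder (all input values invented).
U, D = 0.5, 0.02            # free-stream velocity (m/s), cylinder diameter (m)
nu = 1.5e-5                 # kinematic viscosity of air (m2/s)
beta, g = 3.3e-3, 9.81      # thermal expansion coefficient (1/K), gravity (m/s2)
dT = 30.0                   # cylinder-to-ambient temperature difference (K)
f_shed = 5.2                # vortex shedding frequency (Hz), assumed

Re = U * D / nu                         # inertia vs. viscous forces
Gr = g * beta * dT * D**3 / nu**2       # buoyancy vs. viscous forces
Ri = Gr / Re**2                         # buoyancy vs. inertia (mixed convection)
St = f_shed * D / U                     # non-dimensional shedding frequency

print(f"Re = {Re:.0f}, Gr = {Gr:.0f}, Ri = {Ri:.2f}, St = {St:.2f}")
```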

Keywords: drag reduction, flow over circular cylinder, flow control, mixed convection flow, vortex shedding, vortex breakdown

Procedia PDF Downloads 379
1136 Prediction Modeling of Alzheimer’s Disease and Its Prodromal Stages from Multimodal Data with Missing Values

Authors: M. Aghili, S. Tabarestani, C. Freytes, M. Shojaie, M. Cabrerizo, A. Barreto, N. Rishe, R. E. Curiel, D. Loewenstein, R. Duara, M. Adjouadi

Abstract:

A major challenge in medical studies, especially those that are longitudinal, is the problem of missing measurements, which hinders the effective application of many machine learning algorithms. Furthermore, recent Alzheimer's disease studies have focused on the delineation of Early Mild Cognitive Impairment (EMCI) and Late Mild Cognitive Impairment (LMCI) from cognitively normal controls (CN), which is essential for developing effective and early treatment methods. To address the aforementioned challenges, this paper explores the potential of using the eXtreme Gradient Boosting (XGBoost) algorithm for handling missing values in multiclass classification. We seek a generalized classification scheme in which all prodromal stages of the disease are considered simultaneously in the classification and decision-making processes. Given the large number of subjects (1631) included in this study and the presence of almost 28% missing values, we investigated the performance of XGBoost on the classification of the four classes AD, CN, EMCI, and LMCI. Using a 10-fold cross-validation technique, XGBoost is shown to outperform other state-of-the-art classification algorithms by 3% in terms of accuracy and F-score. Our model achieved an accuracy of 80.52%, a precision of 80.62%, and a recall of 80.51%, supporting the more natural and promising multiclass classification.
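
A minimal Python sketch of the setup described above is shown below: a four-class problem with roughly 28% of the feature values set to NaN, which XGBoost handles natively through its learned default split directions, evaluated with 10-fold cross-validation. The data are synthetic random values, so the printed accuracy is only a placeholder, not a reproduction of the reported results.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from xgboost import XGBClassifier

# Four-class classification (CN / EMCI / LMCI / AD stand-ins) with missing
# feature values; XGBoost handles NaN entries without prior imputation.
rng = np.random.default_rng(0)
n, d = 1631, 20
X = rng.normal(size=(n, d))
y = rng.integers(0, 4, size=n)                # synthetic labels for the four classes
X[rng.random(size=X.shape) < 0.28] = np.nan   # ~28% missing values, as in the study

clf = XGBClassifier(
    objective="multi:softprob",
    n_estimators=200, max_depth=4, learning_rate=0.1,
    eval_metric="mlogloss",
)
scores = cross_val_score(clf, X, y, cv=10, scoring="accuracy")
print(f"10-fold CV accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```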

Keywords: eXtreme gradient boosting, missing data, Alzheimer's disease, early mild cognitive impairment, late mild cognitive impairment, multiclass classification, ADNI, support vector machine, random forest

Procedia PDF Downloads 161
1135 Effect of Surface Treatments on the Cohesive Response of Nylon 6/silica Interfaces

Authors: S. Arabnejad, D. W. C. Cheong, H. Chaobin, V. P. W. Shim

Abstract:

Debonding is one of the fundamental damage mechanisms in particle-filled composites. This phenomenon gains more importance in nanocomposites because of the extensive interfacial region present in these materials. Understanding the debonding mechanism accurately can help in understanding and predicting the response of nanocomposites as the interface deteriorates. The small length scale of the phenomenon makes experimental characterization complicated and its results far from the real physical behavior. In this study, the damage process at the nylon-6/silica interface is examined through molecular dynamics (MD) modeling and simulations. The silica has been modeled with three forms of surface: without any surface treatment, with 3-aminopropyltriethoxysilane (APTES) surface treatment, and with hexamethyldisilazane (HMDZ) surface treatment. The APTES surface modification, used to create functional groups on the silica surface, reacts and forms covalent bonds with nylon 6 chains, while the HMDZ surface treatment interacts with both particle and polymer only through non-bonded interactions. The MD model in this study uses the PCFF force field. The atomic model is generated in a periodic box with a layer of vacuum on top of the polymer layer. This layer of vacuum is large enough to ensure that there is no interaction between the particle and the substrate after debonding. Results show that each of the three models exhibits a different traction-separation behavior; however, all of them show an almost bilinear traction-separation response. The study also reveals a strong correlation between the length of the APTES surface treatment and the cohesive strength of the interface.

Keywords: debonding, surface treatment, cohesive response, separation behaviour

Procedia PDF Downloads 435
1134 Evaluation of Newly Synthesized Steroid Derivatives Using In silico Molecular Descriptors and Chemometric Techniques

Authors: Milica Ž. Karadžić, Lidija R. Jevrić, Sanja Podunavac-Kuzmanović, Strahinja Z. Kovačević, Anamarija I. Mandić, Katarina Penov-Gaši, Andrea R. Nikolić, Aleksandar M. Oklješa

Abstract:

This study considered the selection of in silico molecular descriptors and models for the description of newly synthesized steroid derivatives and their characterization using chemometric techniques. Multiple linear regression (MLR) models were established and gave the best molecular descriptors for quantitative structure-retention relationship (QSRR) modeling of the retention of the investigated molecules. The MLR models showed no multicollinearity among the selected molecular descriptors according to the variance inflation factor (VIF) values. The molecular descriptors used were ranked using the generalized pair correlation method (GPCM). In this method, significant differences between independent variables can be detected even when their correlations with the dependent variable are almost equal. The generated MLR models were statistically validated and cross-validated, and the best models were kept. The models were ranked using the sum of ranking differences (SRD) method. According to this method, the most consistent QSRR model can be found, and similarity or dissimilarity between the models can be observed. In this study, SRD was performed using the average values of the experimentally observed data as the gold standard. Chemometric analysis was conducted in order to characterize the newly synthesized steroid derivatives for further investigation regarding their potential biological activity and further synthesis. This article is based upon work from COST Action (CM1105), supported by COST (European Cooperation in Science and Technology).
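
Two of the checks described above, the MLR fit and the VIF-based multicollinearity screen, are straightforward to reproduce. The Python sketch below uses statsmodels on synthetic descriptor and retention data; the descriptor names and the VIF threshold are assumptions for illustration, not the study's actual descriptors.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

# Fit an MLR model of retention on a few molecular descriptors and check
# multicollinearity via VIF. Descriptor and retention values are synthetic.
rng = np.random.default_rng(42)
n = 30
X = pd.DataFrame({
    "logP": rng.normal(2.5, 0.8, n),            # hypothetical descriptors
    "polar_surface_area": rng.normal(60, 15, n),
    "molar_refractivity": rng.normal(90, 10, n),
})
retention = 0.8 * X["logP"] - 0.02 * X["polar_surface_area"] + rng.normal(0, 0.3, n)

Xc = sm.add_constant(X)
model = sm.OLS(retention, Xc).fit()
print(model.params.round(3))

# VIF > 10 (sometimes > 5) is the usual flag for multicollinearity
for i, name in enumerate(Xc.columns):
    if name != "const":
        print(f"VIF({name}) = {variance_inflation_factor(Xc.values, i):.2f}")
```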

Keywords: generalized pair correlation method, molecular descriptors, regression analysis, steroids, sum of ranking differences

Procedia PDF Downloads 317
1133 ANSYS FLUENT Simulation of Natural Convection and Radiation in a Solar Enclosure

Authors: Sireetorn Kuharat, Anwar Beg

Abstract:

In this study, the multi-mode heat transfer characteristics of spacecraft solar collectors are investigated computationally. Two-dimensional, steady-state, incompressible, laminar, Newtonian viscous convective-radiative heat transfer is simulated in a rectangular solar collector geometry. The ANSYS FLUENT finite volume code (version 17.2) is employed to simulate the thermo-fluid characteristics. Several radiative transfer models available in the ANSYS workbench are employed, including the classical Rosseland flux model and the more elegant P1 flux model. Mesh-independence tests are conducted. The simulations are validated against a computational Harlow-Welch MAC (Marker and Cell) finite difference method, with excellent correlation. The influence of aspect ratio, Prandtl number (Pr), Rayleigh number (Ra), and radiative flux model on temperature, isotherms, velocity, and pressure is evaluated and visualized in color plots. Additionally, the local convective heat flux is computed, and solutions are compared with the MAC solver for various buoyancy effects (e.g., Ra = 10,000,000), achieving excellent agreement. The P1 model is shown to better predict the actual influence of solar radiative flux on thermal fluid behavior compared with the more limited Rosseland model. With increasing Rayleigh number, the hot zone emanating from the base of the collector is found to penetrate deeper into the collector and to rise symmetrically, dividing into two vortex regions at very high buoyancy (Ra > 100,000). With increasing Prandtl number (three gas cases are examined: a hydrogen gas mixture, air, and ammonia gas), there is also a progressive incursion of the hot zone at the solar collector base higher into the collector space and, simultaneously, greater asymmetry of the dual isothermal zones. With increasing aspect ratio (a wider base relative to the height of the solar collector geometry), there is a greater thermal convection pattern around the whole geometry, higher temperatures, and the elimination of the cold upper zone associated with lower aspect ratios.
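
For orientation, the two dimensionless groups varied in the study can be evaluated as in this short Python sketch; the gas properties and enclosure size used here are illustrative assumptions rather than the values used in the simulations:

def rayleigh(g, beta, dT, L, nu, alpha):
    # Rayleigh number: Ra = g * beta * dT * L^3 / (nu * alpha)
    return g * beta * dT * L**3 / (nu * alpha)

def prandtl(nu, alpha):
    # Prandtl number: Pr = nu / alpha
    return nu / alpha

# Assumed air-like properties near 300 K and an assumed 0.1 m enclosure:
g, beta, dT, L = 9.81, 1.0 / 300.0, 30.0, 0.1    # m/s^2, 1/K, K, m
nu, alpha = 1.6e-5, 2.2e-5                        # m^2/s
print(f"Ra = {rayleigh(g, beta, dT, L, nu, alpha):.2e}, Pr = {prandtl(nu, alpha):.2f}")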

Keywords: thermal convection, radiative heat transfer, solar collector, Rayleigh number

Procedia PDF Downloads 97
1132 An Investigation into Why Liquefaction Charts Work: A Necessary Step toward Integrating the States of Art and Practice

Authors: Tarek Abdoun, Ricardo Dobry

Abstract:

This paper is a systematic effort to clarify why field liquefaction charts based on Seed and Idriss' Simplified Procedure work so well. This is a necessary step toward integrating the state of the art (SOA) and the state of practice (SOP) for evaluating liquefaction and its effects. The SOA relies mostly on laboratory measurements and correlations with the void ratio and relative density of the sand. The SOP is based on field measurements of penetration resistance and shear wave velocity coupled with empirical or semi-empirical correlations. This gap slows down further progress in both the SOP and the SOA. The paper accomplishes its objective through: a literature review of relevant aspects of the SOA, including factors influencing threshold shear strain and pore pressure buildup during cyclic strain-controlled tests; a discussion of factors influencing field penetration resistance and shear wave velocity; and a discussion of the meaning of the curves in the liquefaction charts separating liquefaction from no liquefaction, aided by recent full-scale and centrifuge results. It is concluded that the charts are curves of constant cyclic strain at the lower end (Vs1 < 160 m/s), with this strain being about 0.03 to 0.05% for earthquake magnitude Mw ≈ 7. It is also concluded, more speculatively, that the curves at the upper end probably correspond to a variable, increasing cyclic strain and Ko, with this upper end controlled by overconsolidated and preshaken sands, and with the cyclic strains needed to cause liquefaction being as high as 0.1 to 0.3%. These conclusions are validated by application to case histories corresponding to Mw ≈ 7, mostly in the San Francisco Bay Area of California during the 1989 Loma Prieta earthquake.
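
As background on the Simplified Procedure referred to above, the following Python sketch evaluates its cyclic stress ratio; the depth-reduction factor uses the commonly cited Liao-Whitman approximation as an assumption, and the site values are hypothetical rather than taken from the case histories:

def cyclic_stress_ratio(a_max_g, sigma_v, sigma_v_eff, depth_m):
    # Simplified-procedure cyclic stress ratio:
    # CSR = 0.65 * (a_max / g) * (sigma_v / sigma_v') * rd,
    # with rd approximated here by the Liao-Whitman depth-reduction expression.
    if depth_m <= 9.15:
        rd = 1.0 - 0.00765 * depth_m
    else:
        rd = 1.174 - 0.0267 * depth_m
    return 0.65 * a_max_g * (sigma_v / sigma_v_eff) * rd

# Hypothetical site: a_max = 0.25 g, total stress 100 kPa, effective stress 60 kPa, depth 5 m
print(f"CSR = {cyclic_stress_ratio(0.25, 100.0, 60.0, 5.0):.3f}")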

Keywords: permeability, lateral spreading, liquefaction, centrifuge modeling, shear wave velocity charts

Procedia PDF Downloads 272
1131 Bio-Guided Isolation of New Active Alkaloids from Alstonia brassii: Toxicity, Antitumour Activity, In Silico and Molecular Modeling

Authors: Mesbah Khaled, Bouraoui Ouissal, Benkiniouar Rachid, Belkhiri Lotfi

Abstract:

Alstonia species are tropical plants with a wide geographical distribution and have been divided into different sections by different authors on the basis of previous studies of several species within the genus: Monachino divides Alstonia into five sections, while Pichon divides it into three. Several plants belonging to this genus, such as Alstonia brassii, have been used in traditional folk medicine to treat ailments such as fever, malaria, and dysentery. Previous studies focusing on the chemical composition of these plants have identified indole alkaloids with cytotoxic, anti-diabetic, and anti-inflammatory properties. The newly discovered monomers are structurally similar to the picraline, affinisine, and macroline backbones, while all of the recently isolated dimeric compounds contain a macroline moiety. The chemical composition was identified by 1D and 2D NMR, and eight new alkaloids were isolated: five monomers and three dimers. In this study, a computational analysis was performed on this series of novel molecules, including both monomeric and dimeric compounds with different structural frameworks. This investigation represents the first computational study of these molecules using an in silico approach incorporating 2D-QSAR data. The analysis involved various computational techniques, including 2D-QSAR modeling, molecular docking studies, and subsequent validation by molecular dynamics simulation and assessment of ADMET properties. This work focuses on the biological activity of four of the new alkaloids, belonging to two different skeletons, including the affinisine skeleton.
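
As a minimal sketch of the 2D-QSAR step described above, assuming RDKit and scikit-learn are available, the following Python snippet computes a few 2D descriptors and fits a linear model; the SMILES strings and activity values are placeholders, not the isolated alkaloids or their measured activities:

import numpy as np
from rdkit import Chem
from rdkit.Chem import Descriptors
from sklearn.linear_model import LinearRegression

# Placeholder structures and activities (NOT the isolated alkaloids):
smiles = ["CCO", "CCN", "c1ccccc1O", "CC(=O)Oc1ccccc1C(=O)O", "CCCCO"]
activity = np.array([1.2, 1.5, 2.3, 3.1, 1.8])   # placeholder pIC50-like values

def descriptors_2d(smi):
    mol = Chem.MolFromSmiles(smi)
    # Three common 2D descriptors: molecular weight, logP, topological polar surface area
    return [Descriptors.MolWt(mol), Descriptors.MolLogP(mol), Descriptors.TPSA(mol)]

X = np.array([descriptors_2d(s) for s in smiles])
model = LinearRegression().fit(X, activity)
print("Training R^2:", model.score(X, activity))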

Keywords: affinisine, talcarpine, macroline, cytotoxicity, alkaloids

Procedia PDF Downloads 214
1130 Rt. Side Sleeping Position Prevents Sudden Infant Death Syndrome

Authors: Othman Salim Hussein Al-Fleesy

Abstract:

Background: Studies have shown that sudden infant death syndrome (SIDS) is associated with sleeping positions; to date, however, no study has explained how it could be prevented. Objectives: 1) To determine which sleeping position is the safe one for preventing SIDS. 2) To establish criteria for suggesting a definition of, and making the diagnosis of, SIDS. 3) To discuss the controversy surrounding SUND, ALTE, and nightmare (NM) as compared with SIDS. Method: This literature review was built on previous literature; articles were obtained randomly according to their availability to the author. For the purpose of this work, an overview of the SIDS topic was constructed after clarifying the misconceptions and misinterpretations of a number of controversial issues related to SIDS, such as asphyxia, sudden unexpected death among adults (Bangungut or Pokkuri), apparent life-threatening events (ALTE), and nightmare, and comparing the findings with the results of the literature review. By this method, a clue for the prevention of sudden infant death syndrome was obtained. Results: The review revealed that no previous study has examined the right-side sleeping position at all. The author identifies the right side as the only safe position for preventing SIDS, suggests a new definition for SIDS, and postulates a right-side position hypothesis (the Alfleesy hypothesis), which is testable and open to all researchers for further study. Conclusion: These results contradict all previous studies and recommendations; the right-side position alone is strongly recommended for sleeping to prevent SIDS. A new definition is suggested and a new hypothesis is postulated.

Keywords: SIDS, ALTE, nightmare, forensic sciences

Procedia PDF Downloads 426