Search results for: random fields
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4315

3685 Homogeneity among Diversity

Authors: Yu Guang

Abstract:

“Case studies are the preferred strategy when ‘how’ or ‘why’ questions are being posed.” The study is therefore based on two cases: the strategy performed in the JingNan War and the strategy performed by NIKE. The two cases were chosen because they are comparable. Data were gathered, and PEST and SWOT were used as analysis models to examine how each strategy was employed, in order to identify what brilliant strategies in different settings have in common. The niche strategy has been used in the past and in the present, on battlefields and in business. The homogeneity among diversity is the skill with which strategies are performed.

Keywords: challenger, homogeneity, managing diversity, niche strategy

Procedia PDF Downloads 524
3684 Validity of Universe Structure Conception as Nested Vortexes

Authors: Khaled M. Nabil

Abstract:

This paper introduces the Nested Vortexes conception of the universe structure and interprets the physical phenomena according to this conception. The paper first reviews recent physics theories, at both microscopic and macroscopic scales, to collect evidence that space is not empty. However, these theories describe the properties of the space medium without determining its structure. Determining the structure of the space medium is essential to understanding the mechanism that leads to its properties. Without determining the space medium structure, many phenomena, such as electric and magnetic fields, gravity, or wave-particle duality, remain uninterpreted. Thus, this paper introduces a conception of the structure of the universe. It assumes that the universe is a medium of ultra-tiny homogeneous particles which are still undiscovered. As in any medium subject to certain movements, possibly because of a great asymmetric explosion, vortexes have occurred. A vortex condenses the ultra-tiny particles at its center, forming a bigger particle; the bigger particles, in turn, could be trapped in a bigger vortex and condense at its center, forming a much bigger particle, and so on. This conception describes galaxies, stars, and protons as particles at different levels. The existence of the particles' vortexes implies that the postulate of the constancy of the speed of light does not hold. This conception shows that the dynamics of vortex motion agree with the motion of the universe's particles at every level. An experiment has been carried out to detect the orbiting effect of the aggregated vortexes of the aligned atoms of a permanent magnet. Based on the described particle structure, the gravitational force of a particle and the attraction between particles, as well as charge, electric and magnetic fields, and quantum-mechanical characteristics, are interpreted, thereby addressing the physical phenomena mentioned above.

Keywords: astrophysics, cosmology, particles’ structure model, particles’ forces

Procedia PDF Downloads 120
3683 Security of Database Using Chaotic Systems

Authors: Eman W. Boghdady, A. R. Shehata, M. A. Azem

Abstract:

Database (DB) security demands permitting the actions of authorized users and prohibiting those of unauthorized users and intruders on the DB and the objects inside it. Successfully running organizations demand the confidentiality of their DBs. They do not allow unauthorized access to their data/information, and they demand assurance that their data is protected against any malicious or accidental modification. DB protection and confidentiality are the security concerns. There are four types of controls to obtain DB protection: access control, information flow control, inference control, and cryptographic control. Cryptographic control is considered the backbone of DB security; it secures the DB by encryption during storage and communication. Current cryptographic techniques are classified into two types: traditional classical cryptography using standard algorithms (DES, AES, IDEA, etc.) and chaos cryptography using continuous (Chua, Rossler, Lorenz, etc.) or discrete (Logistic, Henon, etc.) systems. The defining characteristic of chaos is its extreme sensitivity to the initial conditions of the system. In this paper, DB-security systems based on chaotic algorithms are described. Pseudo Random Number Generators (PRNGs) based on the different chaotic algorithms are implemented in Matlab, and their statistical properties are evaluated using the NIST and other statistical test suites. These algorithms are then used to secure a conventional DB (plaintext), and the statistical properties of the ciphertext are also tested. To increase the complexity of the PRNGs and to ensure that all the NIST statistical tests are passed, we propose two hybrid PRNGs: one based on two chaotic Logistic maps and another based on two chaotic Henon maps, where the two chaotic maps run side by side and start from random, independent initial conditions and parameters (encryption keys). The resulting hybrid PRNGs passed the NIST statistical test suite.
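
As a rough, illustrative sketch of the hybrid idea described above (the paper's implementation is in Matlab; the keying scheme and the byte-extraction and combination rules below are assumptions, not the authors' algorithm), two logistic maps can be run side by side and combined into a keystream:

    # Illustrative sketch of a hybrid PRNG built from two chaotic logistic maps.
    # The XOR combination rule and the keying scheme are assumptions for illustration;
    # the paper's Matlab implementation may differ.

    def logistic_stream(x0, r=3.99):
        """Yield an endless chaotic sequence x_{n+1} = r * x_n * (1 - x_n)."""
        x = x0
        while True:
            x = r * x * (1.0 - x)
            yield x

    def hybrid_prng_bytes(key=(0.123456, 0.654321), n_bytes=1024, burn_in=1000):
        """Generate pseudo-random bytes by XOR-ing two logistic maps run side by side."""
        a, b = logistic_stream(key[0]), logistic_stream(key[1])
        for _ in range(burn_in):          # discard transients
            next(a), next(b)
        out = bytearray()
        for _ in range(n_bytes):
            byte_a = int(next(a) * 256) & 0xFF
            byte_b = int(next(b) * 256) & 0xFF
            out.append(byte_a ^ byte_b)   # hybrid combination of the two maps
        return bytes(out)

    if __name__ == "__main__":
        stream = hybrid_prng_bytes()
        print(stream[:16].hex())          # keystream bytes, e.g. for encrypting DB records

A keystream of this kind would then be tested with the NIST suite before being used to encrypt the plaintext DB.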

Keywords: algorithms and data structure, DB security, encryption, chaotic algorithms, Matlab, NIST

Procedia PDF Downloads 265
3682 Audit of TPS Photon Beam Dataset for Small Field Output Factors Using OSLDs against RPC Standard Dataset

Authors: Asad Yousuf

Abstract:

Purpose: The aim of the present study was to audit the treatment planning system (TPS) beam dataset for small field output factors against the standard dataset produced by the Radiological Physics Center (RPC) from a multicenter study. Such data are crucial for the validity of special techniques, e.g., IMRT or stereotactic radiosurgery. Materials/Method: In this study, multiple small field size output factor datasets were measured and calculated for 6 to 18 MV x-ray beams using the RPC-recommended methods. These beam datasets were measured at 10 cm depth for 10 × 10 cm2 to 2 × 2 cm2 field sizes, defined by the collimator jaws at 100 cm. The measurements were made with Landauer nanoDot OSLDs, whose volume is small enough to gather a full reading even for a 1 × 1 cm2 field size. At our institute, the beam data, including output factors, were commissioned at 5 cm depth with an SAD setup. For comparison with the RPC data, the output factors were converted to an SSD setup using tissue phantom ratios. The SSD setup also ensures coverage of the ion chamber in the 2 × 2 cm2 field size. The measured output factors were also compared with those calculated by the Eclipse™ treatment planning software. Result: The measured and calculated output factors agree with the RPC dataset within 1% and 4%, respectively. The larger discrepancies in the TPS reflect the increased challenge of converting measured data into a commissioned beam model for very small fields. Conclusion: OSLDs are a simple, durable, and accurate tool to verify doses delivered using small photon fields down to 1 × 1 cm2. The study emphasizes that the treatment planning system should always be evaluated for small field output factors to ensure accurate dose delivery in the clinical setting.

Keywords: small field dosimetry, optically stimulated luminescence, audit treatment, radiological physics center

Procedia PDF Downloads 327
3681 Improving Search Engine Performance by Removing Indexes to Malicious URLs

Authors: Durga Toshniwal, Lokesh Agrawal

Abstract:

As the web continues to play an increasing role in information exchange and in conducting daily activities, computer users have become the target of miscreants who infect hosts with malware or adware for financial gain. Unfortunately, even a single visit to a compromised web site enables the attacker to detect vulnerabilities in the user’s applications and force the downloading of a multitude of malware binaries. We provide an approach to effectively scan the so-called drive-by downloads on the Internet. Drive-by downloads are the result of URLs that attempt to exploit their visitors and cause malware to be installed and run automatically. To scan the web for malicious pages, the first step is to use a crawler to collect URLs that live on the Internet, and then to apply fast prefiltering techniques to reduce the number of pages that need to be examined by precise, but slower, analysis tools (such as honeyclients or antivirus programs). Although the technique is effective, it requires a substantial amount of resources. A main reason is that the crawler encounters many pages on the web that are legitimate and need to be filtered out. In this paper, to characterize the nature of this rising threat, we present an implementation of a web crawler in Python and an approach to search the web more efficiently for pages that are likely to be malicious, filtering out benign pages and passing the remaining pages to an antivirus program for malware detection. Our approach starts from an initial seed of known malicious web pages. Using these seeds, our system generates search engine queries to identify other malicious pages that are similar to the ones in the initial seed. By doing so, it leverages the crawling infrastructure of search engines to retrieve URLs that are much more likely to be malicious than a random page on the web. The results show that this guided approach is able to identify malicious web pages more efficiently when compared to random crawling-based approaches.
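
A minimal prefilter sketch in that spirit is shown below; the suspicious-content heuristics, the score threshold, and the placeholder seed URL are assumptions for illustration and are not the system described in the paper:

    # Illustrative prefilter (not the authors' system): fetch candidate URLs and flag
    # pages showing common drive-by-download indicators before handing them to a
    # slower, precise analysis tool (honeyclient or antivirus).
    import re
    import urllib.request

    SUSPICIOUS = [r"eval\s*\(", r"unescape\s*\(", r"document\.write\s*\(",
                  r"<iframe[^>]+(width|height)\s*=\s*[\"']?0"]

    def fetch(url, timeout=10):
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return resp.read(200_000).decode("utf-8", errors="ignore")
        except Exception:
            return ""

    def prefilter(url):
        """Return True if the page should be forwarded to precise (slow) analysis."""
        html = fetch(url)
        score = sum(bool(re.search(p, html, re.IGNORECASE)) for p in SUSPICIOUS)
        return score >= 2        # threshold chosen for illustration only

    if __name__ == "__main__":
        seeds = ["http://example.com/"]     # placeholder; real seeds are known-malicious URLs
        flagged = [u for u in seeds if prefilter(u)]
        print("pages forwarded to antivirus/honeyclient:", flagged)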

Keywords: web crawler, malwares, seeds, drive-by-downloads, security

Procedia PDF Downloads 230
3680 Lineament Analysis as a Method of Mineral Deposit Exploration

Authors: Dmitry Kukushkin

Abstract:

Lineaments form complex grids on the Earth's surface. Currently, one particular object of study for many researchers is the analysis and geological interpretation of maps of lineament density in an attempt to locate various geological structures. However, lineament grids are made up of global, regional, and local components, and the superimposition of lineament grids at these various scales renders the method less effective. Besides, erosion processes and the erosional resistance of the rocks lying at the surface play a significant role in the formation of lineament grids. Therefore, a specific lineament density map is characterized by poor contrast (most anomalies do not exceed the average values by more than 30%) and an unstable relation to local geological structures. Our method allows the location and boundaries of local geological structures that are likely to contain mineral deposits to be determined with confidence. Maps of the fields of lineament distortion (residual specific density) created by our method are characterized by high contrast, with anomalies exceeding the average by upwards of 200%, and by a stable correlation to local geological structures containing mineral deposits. Our method considers a lineament grid as a general lineament field – the surface manifestation of the stress and strain fields of the Earth associated with geological structures of global, regional, and local scales. Each of these structures has its own field of brittle dislocations that appears at the surface as its lineament field. Our method allows the local components to be singled out by suppressing the global and regional components of the general lineament field. The remaining local lineament field is an indicator of local geological structures. The following are some examples of the method's application: 1. Srednevilyuiskoye gas condensate field (Yakutia) - a direct proof of the effectiveness of the methodology; 2. Structure of Astronomy (Taimyr) - confirmed by the seismic survey; 3. Active gold mine of Kadara (Chita Region) - confirmed by geochemistry; 4. Active gold mine of Davenda (Yakutia) - determined the boundaries of the granite massif that controls mineralization; 5. An object promising for hydrocarbon exploration in the north of Algeria - correlated with the results of geological, geochemical, and geophysical surveys. For both Kadara and Davenda, the method demonstrated that the intensive anomalies of the local lineament fields are consistent with the geochemical anomalies and indicate the presence of gold at commercial levels. Our method of suppressing the global and regional components results in an isolated local lineament field. In the early stages of geological exploration for oil and gas, this allows the boundaries of various geological structures to be determined with very high reliability. Therefore, our method allows the placement of seismic profiles and exploratory drilling equipment to be optimized, and this leads to a reduction in the costs of prospecting for and exploring deposits, as well as an acceleration of their commissioning.
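
One plausible way to realize the suppression of the regional and global components numerically is sketched below; the gridded density map, the smoothing scale, and the synthetic anomaly are invented for illustration and do not reproduce the authors' procedure:

    # Schematic separation of a local lineament-density component by suppressing
    # the regional/global trend. The smoothing scale is an illustrative assumption.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def residual_density(lineament_density, regional_sigma=25.0):
        """Residual (local) field = observed density - smoothed regional trend."""
        regional = gaussian_filter(lineament_density, sigma=regional_sigma)
        return lineament_density - regional

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        density = rng.random((200, 200))        # stand-in for a gridded density map
        density[90:110, 90:110] += 2.0          # a local anomaly
        local = residual_density(density)
        print("max residual anomaly:", float(local.max()))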

Keywords: lineaments, mineral exploration, oil and gas, remote sensing

Procedia PDF Downloads 305
3679 Conjugal Relationship and Reproductive Decision-Making among Couples in Southwest Nigeria

Authors: Peter Olasupo Ogunjuyigbe, Sarafa Shittu

Abstract:

This paper emphasizes the relevance of the conjugal relationship and spousal communication in enhancing men’s involvement in contraceptive use among the Yorubas of South Western Nigeria. An understanding of males’ influence and the role they play in reproductive decision making can throw better light on the mechanisms through which the egalitarianism of husband-wife decision making influences contraceptive use. The objective of this study was to investigate how close conjugal relationships can be a good indicator of joint decision making among couples, using data derived from a survey conducted in three states of South Western Nigeria. The study sample consisted of five hundred and twenty-one (521) male respondents aged 15-59 years and five hundred and forty-seven (547) female respondents aged 15-49 years. The study used both quantitative and qualitative approaches to elicit information from the respondents. In order for the study to be truly representative of the towns, each of the study locations in the capital cities was divided into four strata: the traditional area, the migrant area, the mixed area (i.e., traditional and migrant), and the elite area. In the rural areas, selection of the respondents was by a simple random sampling technique. However, the random selection was made in such a way that all the different parts of the locations were represented. The data collected were analysed at the univariate, bivariate, and multivariate levels. Logistic regression models were employed to examine the interrelationships between male reproductive behaviour, conjugal relationship, and contraceptive use. The study indicates that current use of contraceptives is high among this major ethnic group in Nigeria because of the improved level of communication among couples. The problem, however, is that men still have a lower exposure rate when it comes to family planning information, education, and counseling. This has serious implications for fertility regulation in Nigeria.

Keywords: behavior, conjugal, communication, counseling, spouse

Procedia PDF Downloads 139
3678 On the Relation between λ-Symmetries and μ-Symmetries of Partial Differential Equations

Authors: Teoman Ozer, Ozlem Orhan

Abstract:

This study deals with the symmetry group properties and conservation laws of partial differential equations. We give a geometrical interpretation of the notion of μ-prolongations of vector fields and of the related concept of μ-symmetry for partial differential equations. We show that these are effective in providing symmetry reductions of partial differential equations and systems and in obtaining invariant solutions.

Keywords: λ-symmetry, μ-symmetry, classification, invariant solution

Procedia PDF Downloads 319
3677 Electric Field-Induced Deformation of Particle-Laden Drops and Structuring of Surface Particles

Authors: Alexander Mikkelsen, Khobaib Khobaib, Zbigniew Rozynek

Abstract:

Drops covered by particles have found important uses in various fields, ranging from the stabilization of emulsions to the production of new advanced materials. Particles at drop interfaces can be interlocked to form solid capsules with properties tailored for a myriad of applications. Despite the huge potential of particle-laden drops and capsules, knowledge of their deformation and stability is limited. In this regard, we contribute experimental studies on the deformation and manipulation of silicone oil drops covered with micrometer-sized particles subjected to electric fields. A mixture of silicone oil and particles was immersed in castor oil using a mechanical pipette, forming millimeter-sized drops. The particles moved to and adsorbed at the drop interfaces by sedimentation and were structured at the interface by electric field-induced electrohydrodynamic flows. When a direct current electric field was applied, free charges accumulated at the drop interfaces, yielding electric stress that deformed the drops. In our experiments, we investigated how particle properties affected drop deformation, break-up, and particle structuring. We found that by increasing the size of weakly conductive clay particles, the drop shape can go from compressed to stretched out in the direction of the electric field. Increasing the particle size and electrical conductivity was also found to weaken the electrohydrodynamic flows, induce break-up of drops at weaker electric field strengths, and structure particles into chains. These particle parameters determine the dipolar force between the interfacial particles, which can yield particle chaining. We conclude that the balance between particle chaining and electrohydrodynamic flows governs the observed drop mechanics.

Keywords: drop deformation, electric field induced stress, electrohydrodynamic flows, particle structuring at drop interfaces

Procedia PDF Downloads 212
3676 Ozone Therapy and Pulsed Electromagnetic Fields Interplay in Controlling Tumor Growth, Symptom and Pain Management: A Case Report

Authors: J. F. Pollo Gaspary, F. Peron Gaspary, E. M. Simão, R. Concatto Beltrame, G. Orengo de Oliveira, M. S. Ristow Ferreira, F. Sartori Thies, I. F. Minello, F. dos Santos de Oliveira

Abstract:

Background: The immune system has evolved several mechanisms to protect the host against cancer, and it has now been suggested that the expansion of its functions may prevent tumor growth and control the symptoms of cancer patients. Two techniques, ozone therapy and pulsed electromagnetic fields (PEMF), are independently associated with an increase in immune system functions, and they may help in the palliative care of patients in these conditions. Case Report: A patient with rectal adenocarcinoma with metastases decided to interrupt the clinical chemotherapy protocol due to refractoriness and side effects. As an alternative palliative care treatment, the use of ozone therapy associated with PEMF techniques was suggested to the patient. Results: The patient reported an improvement in well-being, in autonomy, and in pain control. Imaging tests confirmed a pause in tumor growth despite more than 60 days without the classic treatment. These results, associated with the alternative palliative care treatment, encouraged a return to the chemotherapy protocol. Discussion: This case illustrates that these two techniques can contribute to the control of tumor growth and refractory symptoms, such as pain, probably by enhancing the immune system. Conclusions: The combination of these two therapies, ozone therapy and PEMF therapy, can potentially contribute to the palliation of cancer patients, alone or in combination with pharmacological therapies. Future investigations of this paradigm can elucidate how much these techniques contribute to the survival and well-being of these patients.

Keywords: cancer, complementary and alternative medicine, ozone therapy, palliative care, PEMF therapy

Procedia PDF Downloads 156
3675 Machine Learning Assisted Selective Emitter Design for Solar Thermophotovoltaic System

Authors: Ambali Alade Odebowale, Andargachew Mekonnen Berhe, Haroldo T. Hattori, Andrey E. Miroshnichenko

Abstract:

Solar thermophotovoltaic systems (STPV) have emerged as a promising solution to overcome the Shockley-Queisser limit, a significant impediment in the direct conversion of solar radiation into electricity using conventional solar cells. The STPV system comprises essential components such as an optical concentrator, selective emitter, and a thermophotovoltaic (TPV) cell. The pivotal element in achieving high efficiency in an STPV system lies in the design of a spectrally selective emitter or absorber. Traditional methods for designing and optimizing selective emitters are often time-consuming and may not yield highly selective emitters, posing a challenge to the overall system performance. In recent years, the application of machine learning techniques in various scientific disciplines has demonstrated significant advantages. This paper proposes a novel nanostructure composed of four-layered materials (SiC/W/SiO2/W) to function as a selective emitter in the energy conversion process of an STPV system. Unlike conventional approaches widely adopted by researchers, this study employs a machine learning-based approach for the design and optimization of the selective emitter. Specifically, a random forest algorithm (RFA) is employed for the design of the selective emitter, while the optimization process is executed using genetic algorithms. This innovative methodology holds promise in addressing the challenges posed by traditional methods, offering a more efficient and streamlined approach to selective emitter design. The utilization of a machine learning approach brings several advantages to the design and optimization of a selective emitter within the STPV system. Machine learning algorithms, such as the random forest algorithm, have the capability to analyze complex datasets and identify intricate patterns that may not be apparent through traditional methods. This allows for a more comprehensive exploration of the design space, potentially leading to highly efficient emitter configurations. Moreover, the application of genetic algorithms in the optimization process enhances the adaptability and efficiency of the overall system. Genetic algorithms mimic the principles of natural selection, enabling the exploration of a diverse range of emitter configurations and facilitating the identification of optimal solutions. This not only accelerates the design and optimization process but also increases the likelihood of discovering configurations that exhibit superior performance compared to traditional methods. In conclusion, the integration of machine learning techniques in the design and optimization of a selective emitter for solar thermophotovoltaic systems represents a groundbreaking approach. This innovative methodology not only addresses the limitations of traditional methods but also holds the potential to significantly improve the overall performance of STPV systems, paving the way for enhanced solar energy conversion efficiency.
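
A compact sketch of the random-forest-surrogate plus genetic-algorithm loop is given below; the toy figure of merit, the thickness bounds, and the GA settings are illustrative assumptions and do not reproduce the authors' SiC/W/SiO2/W optical model:

    # Schematic RF-surrogate + genetic-algorithm workflow. The merit function stands in
    # for an electromagnetic solver; values here are not the paper's results.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(1)

    def toy_emittance_merit(thicknesses):
        """Stand-in spectral-selectivity score for a four-layer stack (thicknesses in nm)."""
        return -np.sum((thicknesses - np.array([50.0, 20.0, 80.0, 20.0])) ** 2, axis=-1)

    # 1) Train a random-forest surrogate on sampled designs.
    X = rng.uniform(10.0, 150.0, size=(500, 4))
    y = toy_emittance_merit(X)
    surrogate = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

    # 2) Optimize the surrogate with a minimal genetic algorithm.
    pop = rng.uniform(10.0, 150.0, size=(60, 4))
    for _ in range(40):
        fitness = surrogate.predict(pop)
        parents = pop[np.argsort(fitness)[-20:]]            # selection
        pa = parents[rng.integers(0, 20, 60)]               # crossover: mix genes of parent pairs
        pb = parents[rng.integers(0, 20, 60)]
        mask = rng.random((60, 4)) < 0.5
        children = np.where(mask, pa, pb)
        children += rng.normal(0.0, 3.0, size=(60, 4))      # mutation
        pop = np.clip(children, 10.0, 150.0)

    best = pop[np.argmax(surrogate.predict(pop))]
    print("candidate layer thicknesses (nm):", np.round(best, 1))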

Keywords: emitter, genetic algorithm, radiation, random forest, thermophotovoltaic

Procedia PDF Downloads 62
3674 Dislocation Density-Based Modeling of the Grain Refinement in Surface Mechanical Attrition Treatment

Authors: Reza Miresmaeili, Asghar Heydari Astaraee, Fereshteh Dolati

Abstract:

In the present study, an analytical model based on dislocation density was developed to simulate grain refinement in surface mechanical attrition treatment (SMAT). The correlation between SMAT time and the development of plastic strain, on the one hand, and dislocation density evolution, on the other, was established to simulate the grain refinement in SMAT. A dislocation density-based constitutive material law was implemented using a VUHARD subroutine. A random sequence of shots is taken into account in the multiple-impact model, implemented in the Python programming language using a random function. The simulation technique was to model each impact in a separate run and then transfer the results of each run as the initial conditions for the next run (impact). The developed finite element (FE) model of multiple impacts describes the coverage evolution in SMAT. Simulations were run to coverage levels as high as 4500%. It is shown that the coverage implemented in the FE model is equal to the experimental coverage. The numerical SMAT coverage parameter is also shown to conform adequately to the well-known Avrami model. Comparison between numerical results and experimental measurements of residual stresses and the depth of the deformation layers confirms the performance of the established FE model for surface engineering evaluations in SMA treatment. X-ray diffraction (XRD) studies of grain refinement, including the resultant grain size and dislocation density, were conducted to validate the established model. The full width at half-maximum of the XRD profiles can be used to measure the grain size. Numerical results and experimental measurements of grain refinement illustrate good agreement and show the capability of the established FE model to predict the gradient microstructure in SMA treatment.
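
The random shot sequence and the Avrami-type coverage check can be illustrated with the short sketch below; the shot radius, patch size, and grid resolution are assumed values, and the real workflow transfers the full FE field results between successive impact runs:

    # Illustrative random multiple-impact sequence and Avrami-type coverage comparison.
    import numpy as np

    rng = np.random.default_rng(42)
    patch, shot_r, n_shots = 2.0, 0.25, 400        # patch size (mm), shot radius (mm), impacts

    grid = np.linspace(0.0, patch, 200)
    X, Y = np.meshgrid(grid, grid)
    covered = np.zeros_like(X, dtype=bool)

    coverage = []
    for i in range(n_shots):
        cx, cy = rng.uniform(0.0, patch, size=2)   # random impact location for this run
        covered |= (X - cx) ** 2 + (Y - cy) ** 2 <= shot_r ** 2
        coverage.append(covered.mean())            # covered area fraction after i+1 impacts

    # Avrami-type law: C(n) = 1 - exp(-k * n), with k from the area of a single impact
    k = np.pi * shot_r ** 2 / patch ** 2
    avrami = 1.0 - np.exp(-k * np.arange(1, n_shots + 1))
    print("final simulated coverage: %.1f%%" % (100 * coverage[-1]))
    print("final Avrami prediction : %.1f%%" % (100 * avrami[-1]))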

Keywords: dislocation density, grain refinement, severe plastic deformation, simulation, surface mechanical attrition treatment

Procedia PDF Downloads 138
3673 Translating Discourse Organization Structures Used in Chinese and English Scientific and Engineering Writings

Authors: Ming Qian, Davis Qian

Abstract:

This study compares the different organization structures of Chinese and English writing discourses in the engineering and scientific fields and recommends approaches for translators to convert the organization structures properly. According to the existing intercultural communication literature, English authors tend to deductively give their main points at the beginning, following with detailed explanations or arguments afterwards, while Chinese authors tend to place their main points inductively towards the end. In this study, this hypothesis has been verified by the authors’ Chinese-to-English translation experiences in the fields of science and engineering (e.g., journal papers, conference papers, and monographs). The basic methodology used is the comparison of writings by Chinese authors with writings on the same or similar topics written by English authors in terms of organization structures. Translators should be aware of this nuance so that, instead of limiting themselves to translating the contents of an article in its original structure, they can convert the structures to fill the cross-cultural gap. This approach can be controversial, because if a translator changes the organizational structure of a paragraph (e.g., from a 'because-therefore' inductive structure by a Chinese author to a deductive structure in English), this change of sentence order could be questioned by the original authors. For this reason, translators need to properly inform the original authors about the intercultural differences between English and Chinese writing (e.g., inductive structure versus deductive structure) and work with the original authors to maintain accuracy while converting from one structure used in the source language to another structure in the target language. The authors have incorporated these methodologies into their translation practices and work closely with the authors on the intercultural organization structure mapping. Translating discourse organization structure should become a standard practice in the translation process.

Keywords: discourse structure, information structure, intercultural communication, translation practice

Procedia PDF Downloads 441
3672 Challenges of Knowledge Translation for Pediatric Rehabilitation Technology

Authors: Patrice L. Weiss, Barbara Mazer, Tal Krasovsky, Naomi Gefen

Abstract:

Knowledge translation (KT) involves the process of applying the most promising research findings to practical settings, ensuring that new technological discoveries enhance healthcare accessibility, effectiveness, and accountability. This perspective paper aims to discuss and provide examples of how the KT process can be implemented during a time of rapid advancement in rehabilitation technologies, which have the potential to greatly influence pediatric healthcare. The analysis is grounded in a comprehensive systematic review of literature, where key studies from the past 34 years were carefully interpreted by four expert researchers in scientific and clinical fields. This review revealed both theoretical and practical insights into the factors that either facilitate or impede the successful implementation of new rehabilitation technologies. By utilizing the Knowledge-to-Action cycle, which encompasses the knowledge creation funnel and the action cycle, we demonstrated its application in integrating advanced technologies into clinical practice and guiding healthcare policy adjustments. We highlighted three successful technology applications: powered mobility, head support systems, and telerehabilitation. Moreover, we investigated emerging technologies, such as brain-computer interfaces and robotic assistive devices, which face challenges related to cost, durability, and usability. Recommendations include prioritizing early and ongoing design collaborations, transitioning from research to practical implementation, and determining the optimal timing for clinical adoption of new technologies. In conclusion, this paper informs, justifies, and strengthens the knowledge translation process, ensuring it remains relevant, rigorous, and significantly contributes to pediatric rehabilitation and other clinical fields.

Keywords: knowledge translation, rehabilitation technology, pediatrics, barriers, facilitators, stakeholders

Procedia PDF Downloads 29
3671 Effect of Exit Annular Area on the Flow Field Characteristics of an Unconfined Premixed Annular Swirl Burner

Authors: Vishnu Raj, Chockalingam Prathap

Abstract:

The objective of this study was to explore the impact of variation in the exit annular area on the local flow field features and flame stability of an unconfined annular premixed swirl burner operated with a premixed n-butane-air mixture at an equivalence ratio (ϕ) of 1, 1 bar, and 300 K. A swirl burner with an axial swirl generator having a swirl number of 1.5 was used. Three different burner heads were chosen so that the exit area increased from 100% to 160% and 220%, giving inner and outer diameters and cross-sectional areas of (1) 10 mm & 15 mm, 98 mm2, (2) 17.5 mm & 22.5 mm, 157 mm2, and (3) 25 mm & 30 mm, 216 mm2. The bulk velocity and the Reynolds number based on the hydraulic diameter and the unburned gas properties were kept constant at 12 m/s and 4000, respectively. (i) Planar PIV with TiO2 seeding particles and (ii) OH* chemiluminescence were used to measure the velocity fields and the reaction zones of the swirl flames at 5 Hz, respectively. Velocity fields and jet spreading rates measured under isothermal and reactive conditions revealed that the presence of a flame significantly altered the flow field in the radial direction due to gas expansion. Important observations from the flame measurements were that the height and maximum width of the recirculation bubbles normalized by the hydraulic diameter, and the jet spreading angles of the flames, for the three exit area cases were: (a) 4.52, 1.95, 28°, (b) 6.78, 2.37, 34°, and (c) 8.73, 2.32, 37°. The lean blowout was also measured, and the respective equivalence ratios were 0.80, 0.92, and 0.82; the LBO limit was relatively narrow for the 157 mm2 case. For this case, particle image velocimetry (PIV) measurements showed that the turbulent kinetic energy and turbulence intensity were relatively high compared to the other two cases, resulting in higher stretch rates and a narrower lean blowout (LBO) limit.
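
Because the hydraulic diameter of an annulus reduces to the gap width, D_o minus D_i, all three heads share D_h = 5 mm, which is why the bulk velocity and Reynolds number can be held fixed while the exit area changes. A quick check (the unburned-gas density and viscosity below are assumed, representative values, so the result is only approximately the quoted Re of 4000):

    # Consistency check of the constant hydraulic diameter and Reynolds number.
    cases = [(10e-3, 15e-3), (17.5e-3, 22.5e-3), (25e-3, 30e-3)]   # inner, outer diameter (m)
    U = 12.0                      # bulk velocity (m/s)
    rho, mu = 1.1, 1.8e-5         # assumed unburned-gas density (kg/m^3) and viscosity (Pa s)

    for d_i, d_o in cases:
        d_h = d_o - d_i           # hydraulic diameter of an annulus: 4A/P = D_o - D_i
        Re = rho * U * d_h / mu
        print(f"D_h = {d_h*1e3:.1f} mm, Re ~ {Re:.0f}")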

Keywords: chemiluminescence, jet spreading rate, lean blowout, swirl flow

Procedia PDF Downloads 68
3670 Model Evaluation of Thermal Effects Created by Cell Membrane Electroporation

Authors: Jiahui Song

Abstract:

The use of very high electric fields (~100 kV/cm or higher) with pulse durations in the nanosecond range has been a recent development. The electric pulses have been used as tools to generate electroporation, which has many biomedical applications. Most studies of electroporation have ignored possible thermal effects because of the short duration of the applied voltage pulses. However, membrane temperature gradients ranging from 0.2×10⁹ to 10⁹ K/m have been predicted. This research focuses on thermal gradients as a driver of electroporative enhancement, even though the actual temperature values might not change appreciably from their equilibrium levels. The dynamics of pore formation under an externally applied electric field are studied on the basis of molecular dynamics (MD) simulations using the GROMACS package. Different temperatures are assigned to various regions to simulate the appropriate temperature gradients. GROMACS provides the force fields for the lipid membrane, which is taken to consist of dipalmitoyl-phosphatidyl-choline (DPPC) molecules. The water model mimics the aqueous environment surrounding the membrane. Velocities of the water and membrane molecules are generated randomly at each simulation run according to a Maxwellian distribution. For statistical significance, a total of eight MD simulations are carried out with different starting molecular velocities for each simulation. The MD simulations show that no pore is formed in a 10-ns snapshot for a DPPC membrane set at a uniform temperature of 295 K after a 0.4 V/nm electric field is applied. A nano-sized pore is clearly seen in a 10-ns snapshot of the same geometry but with the top and bottom membrane surfaces kept at temperatures of 300 and 295 K, respectively. For the same applied electric field, the formation of nanopores is clearly demonstrated, but only in the presence of a temperature gradient. The MD simulation results thus show enhanced electroporative effects arising from thermal gradients. The study suggests that the temperature gradient is a secondary driver, with the electric field being the primary cause of electroporation.
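
A minimal sketch of assigning Maxwellian starting velocities, of the kind generated at the beginning of each run, is given below; the particle count, mass, and temperature are illustrative values, and in practice the MD package handles this step internally:

    # Sampling Maxwellian (Maxwell-Boltzmann) starting velocities: each Cartesian
    # component is Gaussian with variance k_B * T / m.
    import numpy as np

    KB = 1.380649e-23            # Boltzmann constant (J/K)

    def maxwellian_velocities(n_atoms, mass_kg, temperature_k, rng):
        sigma = np.sqrt(KB * temperature_k / mass_kg)
        return rng.normal(0.0, sigma, size=(n_atoms, 3))

    rng = np.random.default_rng(7)
    m_water = 18.015e-3 / 6.022e23                 # mass of one water molecule (kg)
    v = maxwellian_velocities(5000, m_water, 295.0, rng)
    mean_ke = 0.5 * m_water * (v ** 2).sum(axis=1).mean()
    print("kinetic temperature ~ %.1f K" % (2.0 * mean_ke / (3.0 * KB)))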

Keywords: nanosecond, electroporation, thermal effects, molecular dynamics

Procedia PDF Downloads 83
3669 Hypoglycemic and Hypolipidemic Effects of Aqueous Flower Extract from Nyctanthes arbor-tristis L.

Authors: Brahmanage S. Rangika, Dinithi C. Peiris

Abstract:

A boiled aqueous flower extract (AFE) of Nyctanthes arbor-tristis L. (Family: Oleaceae) is used in the traditional Sri Lankan medicinal system to treat diabetes. However, this use is not scientifically proven, and the mechanisms by which the flowers reduce blood glucose have not been investigated. The present study was carried out to examine the hypoglycemic potential and toxicity of the aqueous flower extract of N. arbor-tristis. AFE was prepared, and mice were treated orally with 250, 500, or 750 mg/kg of AFE or with distilled water (control). Fasting and random blood glucose levels were determined. In addition, the toxicity of AFE was determined using chronic oral administration. In normoglycemic mice, the mid dose (500 mg/kg) of AFE significantly (p < 0.01) reduced fasting blood glucose levels by 49% at 4 h post treatment. Further, 500 mg/kg of AFE significantly (p < 0.01) lowered the random blood glucose level of non-fasted normoglycemic mice. AFE significantly lowered total cholesterol and triglyceride levels while increasing HDL levels in the serum. Further, AFE significantly inhibited glucose absorption from the lumen of the intestine and increased glucose uptake by the diaphragm. Alpha-amylase inhibitory activity was also evident. However, AFE did not induce any overt signs of toxicity or hepatotoxicity. There were no adverse effects on food and water intake or on the body weight of the mice during the experimental period. It can be concluded that the AFE of N. arbor-tristis possesses safe oral antidiabetic potential mediated via multiple mechanisms. The results of the present study scientifically support the claims made about the use of N. arbor-tristis in the treatment of diabetes mellitus in the traditional Sri Lankan medicinal system. Further, the flowers can also be used as a remedy to improve the blood lipid profile.

Keywords: aqueous extract, hypoglycemic hypolipidemic, Nyctanthes arbor-tristis flowers, hepatotoxicity

Procedia PDF Downloads 371
3668 Analysis of Overall Thermo-Elastic Properties of Random Particulate Nanocomposites with Various Interphase Models

Authors: Lidiia Nazarenko, Henryk Stolarski, Holm Altenbach

Abstract:

In this paper, a (hierarchical) approach to the analysis of the thermo-elastic properties of random composites with interphases is outlined and illustrated. It is based on a statistical homogenization method – the method of conditional moments – combined with the recently introduced notion of the energy-equivalent inhomogeneity, which, in this paper, is extended to include thermal effects. After an exposition of the general principles, the approach is applied to the investigation of the effective thermo-elastic properties of a material with randomly distributed nanoparticles. The basic idea of the equivalent inhomogeneity is to replace the inhomogeneity and its surrounding interphase by a single equivalent inhomogeneity with a constant stiffness tensor and coefficient of thermal expansion, combining the thermal and elastic properties of both. The equivalent inhomogeneity is then perfectly bonded to the matrix, which allows composites with interphases to be analyzed using techniques devised for problems without interphases. From the mechanical viewpoint, the definition of the equivalent inhomogeneity is based on Hill’s energy equivalence principle, applied to the problem consisting only of the original inhomogeneity and its interphase. It is more general than the definitions proposed in the past in that, conceptually and practically, it allows inhomogeneities of various shapes and various models of interphases to be considered. This is illustrated for spherical particles with two models of interphases, the Gurtin-Murdoch material surface model and the spring layer model. The resulting equivalent inhomogeneities are subsequently used to determine the effective thermo-elastic properties of randomly distributed particulate composites. The effective stiffness tensor and coefficient of thermal expansion of the material with the so-defined equivalent inhomogeneities are determined by the method of conditional moments. Closed-form expressions for the effective thermo-elastic parameters of a composite consisting of a matrix and randomly distributed spherical inhomogeneities are derived for the bulk and shear moduli as well as for the coefficient of thermal expansion. The dependence of the effective parameters on the interphase properties is included in the resulting expressions, exhibiting analytically the nature of the size effects in nanomaterials. As a numerical example, an epoxy matrix with randomly distributed spherical glass particles is investigated. The dependence of the effective bulk and shear moduli, as well as of the effective thermal expansion coefficient, on the particle volume fraction (for different radii of nanoparticles) and on the nanoparticle radius (for a fixed volume fraction of nanoparticles) for the different interphase models is compared with and discussed in the context of other theoretical predictions. Possible applications of the proposed approach to short-fiber composites with various types of interphases are discussed.
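
In schematic form (the notation below is assumed here for illustration and is not taken from the paper), the energy-equivalent inhomogeneity is defined so that its strain energy matches the combined energy of the original particle and its interphase,

    \frac{1}{2}\int_{V_{\mathrm{eq}}} \boldsymbol{\varepsilon}:\mathbf{C}^{\mathrm{eq}}:\boldsymbol{\varepsilon}\,\mathrm{d}V
      \;=\; \frac{1}{2}\int_{V_{p}} \boldsymbol{\varepsilon}:\mathbf{C}^{p}:\boldsymbol{\varepsilon}\,\mathrm{d}V \;+\; U_{\mathrm{interphase}},

with the resulting equivalent stiffness and thermal expansion coefficient then entering the method of conditional moments in place of the particle-interphase pair.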

Keywords: effective properties, energy equivalence, Gurtin-Murdoch surface model, interphase, random composites, spherical equivalent inhomogeneity, spring layer model

Procedia PDF Downloads 186
3667 Evaluating Habitat Manipulation as a Strategy for Rodent Control in Agricultural Ecosystems of Pothwar Region, Pakistan

Authors: Nadeem Munawar, Tariq Mahmood

Abstract:

Habitat manipulation is an important technique that can be used for controlling rodent damage in agricultural ecosystems. It involves the intentional manipulation of the vegetation cover in adjacent habitats around the active burrows of rodents to reduce shelter and food availability and to increase predation pressure. The current study was conducted in the Pothwar Plateau during the respective non-crop periods of wheat and groundnut (post-harvest and unploughed/non-crop fallow lands) with the aim of assessing the impact of a reduction in the vegetation height of adjacent habitats (field borders) on rodent richness and abundance. The study area was divided into two sites, treated and non-treated. At the treated sites, habitat manipulation was carried out by removing crop caches and non-crop vegetation over 10 cm in height to a distance of approximately 20 m from the fields. The trapping sessions carried out at the treated and non-treated sites adjacent to wheat-groundnut fields were significantly different from each other (F 2, 6 = 13.2, P = 0.001), revealing that the maximum number of rodents was captured at the non-treated sites. There was a significant difference in the overall abundance of rodents (P < 0.05) between crop stages and between treatments in both crops. The manipulation effect on crop damage and yield was also significant, resulting in a reduction of damage within the associated croplands (P < 0.05). The outcomes of this study indicate a significant reduction of the rodent population at the treated sites due to changes in vegetation height and cover, which affect important components, i.e., food, shelter, and movement, and increase risk sensitivity in the rodents' feeding behavior; therefore, populations were unable to reach levels at which they cause significant crop damage. This method is recommended as a cost-effective and easy-to-apply approach.

Keywords: agricultural ecosystems, crop damage, habitat manipulation, rodents, trapping

Procedia PDF Downloads 166
3666 The Battle between French and English in the Algerian University: Ideological and Pedagogical Stakes

Authors: Taoufik Djennane

Abstract:

Algeria is characterized by a fragmented language education policy. While pre-university education is conducted entirely in Arabic, higher education remains linguistically divided, with some fields offered in Arabic and others based exclusively on French. Within this linguistic policy, English remains far behind French. However, there has been a significant shift in the state’s linguistic orientation since the social uprising of March 2019, known as El-Hirak, which ousted the former president. Since then, social calls to get rid of French have been voiced, and English has started to receive an unprecedented political push. The historic decision only came at the beginning of the academic year 2023-2024, when the ministry of higher education imposed English as the medium of instruction (hereafter EMI), especially in scientific and technological fields. As such, this paper considers this abrupt switch in the medium of instruction and its effects on the community of teachers. Building on a socio-psychological approach, teachers’ attitudes towards EMI were measured. Data were collected using classroom observation, semi-structured interviews, and a survey. The results showed that a clear majority of teachers hold negative attitudes towards EMI. The point is that they are linguistically incompetent and not yet ready to deliver content subjects in a language of which they have little or no command. The study showed the importance of considering attitudes in the ‘policy-formation’ stage before the ‘implementation’ stage. The findings also proved that teachers are not passive bystanders; rather, they can be the final arbiters, imposing themselves as policy-makers and resisting ministerial instructions through their linguistic practices inside the classroom, which only acknowledge French. The study shows the necessity of avoiding a sudden switch and opting for gradual change, without sidelining those who are directly concerned by political and pedagogical measures (teachers, learners, etc.).

Keywords: micro planning, EMI, language education policy, agency

Procedia PDF Downloads 75
3665 Genetic Instabilities in Marine Bivalve Following Benzo(α)pyrene Exposure: Utilization of Combined Random Amplified Polymorphic DNA and Comet Assay

Authors: Mengjie Qu, Yi Wang, Jiawei Ding, Siyu Chen, Yanan Di

Abstract:

The marine ecosystem is facing intensified multiple stresses caused by environmental contaminants from human activities. Xenobiotics such as benzo(α)pyrene (BaP) have been discharged into the marine environment and cause hazardous impacts on both marine organisms and human beings. As filter feeders, marine mussels, Mytilus spp., have been extensively used to monitor the marine environment. However, the genomic alterations induced in them by such xenobiotics remain largely unknown. In the present study, the gills, as the first defense barrier in mussels, were selected to evaluate the genetic instability induced by exposure to BaP both in vivo and in vitro. Both the random amplified polymorphic DNA (RAPD) assay and the comet assay were applied as rapid tools to assess environmental stress because of their low cost and time requirements. All mussels were identified as the single species Mytilus coruscus before being used in BaP exposures at a concentration of 56 μg/l for 1 and 3 days (in vivo exposure) or 1 and 3 hours (in vitro). Both the RAPD and comet assay results showed significantly increased genomic instability with a time-specific alteration pattern. After the recovery period following in vivo exposure, the genomic status was the same as under the control condition. However, relatively higher genomic instability was still observed in gill cells after recovery from the in vitro exposure condition. Different repair mechanisms or signaling pathways might be involved in the isolated gill cells in comparison with intact tissues. The study provides robust and rapid techniques to examine genomic stability in marine organisms in response to marine environmental changes and provides basic information for further mechanistic research into stress responses in marine organisms.

Keywords: genotoxic impacts, in vivo/vitro exposure, marine mussels, RAPD and comet assay

Procedia PDF Downloads 280
3664 The Physics of Cold Spray Technology

Authors: Ionel Botef

Abstract:

Studies show that, for high-quality coatings, knowledge of cold spray technology must draw on a variety of interdisciplinary fields and a framework for problem solving. The integrated disciplines include, but are not limited to, engineering, materials science, and physics. Given its importance, the purpose of this paper is to summarize the state of the art of this technology alongside its theoretical and experimental studies, and to explore the role and impact of physics on cold spray technology.

Keywords: surface engineering, cold spray, physics, modelling

Procedia PDF Downloads 531
3663 Constructing the Joint Mean-Variance Regions for Univariate and Bivariate Normal Distributions: Approach Based on the Measure of Cumulative Distribution Functions

Authors: Valerii Dashuk

Abstract:

The use of confidence intervals in economics and econometrics is widespread. To investigate a random variable more thoroughly, joint tests are applied; one such example is the joint mean-variance test. A new approach for testing such hypotheses and constructing confidence sets is introduced. Exploring both the value of the random variable and its deviation with the help of this technique allows the shift and the probability of that shift (i.e., portfolio risks) to be checked simultaneously. Another application is based on the normal distribution, which is fully defined by its mean and variance and can therefore be tested using the introduced approach. The method is based on the difference of probability density functions. The starting point is two sets of normal distribution parameters to be compared (to decide whether they may be considered identical at a given significance level). The absolute difference in probabilities at each 'point' of the domain of these distributions is then calculated. This measure is transformed into a function of the cumulative distribution functions and compared to critical values. The table of critical values was designed from simulations. The approach was compared with other techniques for the univariate case. It differs qualitatively and quantitatively in ease of implementation, computation speed, and accuracy of the critical region (theoretical vs. real significance level). Stable results when working with outliers and non-normal distributions, as well as scaling possibilities, are also strengths of the method. The main advantage of this approach is the possibility of extending it to the infinite-dimensional case, which was not possible in most previous work. At the moment, the extension to the two-dimensional case has been completed, and it allows up to 5 parameters to be tested jointly. The derived technique is therefore equivalent to classic tests in standard situations but gives more efficient alternatives in nonstandard problems and on large amounts of data.
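
A rough simulation-based sketch in this spirit is shown below; the particular discrepancy statistic (a maximum absolute CDF difference) and the critical-value rule are illustrative assumptions and may differ from the measure defined in the paper:

    # Simulation-based joint mean-variance comparison: a discrepancy between two
    # normal CDFs is compared with Monte Carlo critical values under H0.
    import numpy as np
    from scipy.stats import norm

    def cdf_discrepancy(mu1, s1, mu2, s2):
        """Maximum absolute difference between the two normal CDFs over a grid."""
        lo = min(mu1 - 5 * s1, mu2 - 5 * s2)
        hi = max(mu1 + 5 * s1, mu2 + 5 * s2)
        x = np.linspace(lo, hi, 2001)
        return np.max(np.abs(norm.cdf(x, mu1, s1) - norm.cdf(x, mu2, s2)))

    def critical_value(n, alpha=0.05, n_sim=2000, rng=None):
        """Simulate the statistic under H0 (equal mean and variance)."""
        if rng is None:
            rng = np.random.default_rng(0)
        stats = []
        for _ in range(n_sim):
            a, b = rng.standard_normal(n), rng.standard_normal(n)
            stats.append(cdf_discrepancy(a.mean(), a.std(ddof=1), b.mean(), b.std(ddof=1)))
        return np.quantile(stats, 1.0 - alpha)

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        x, y = rng.normal(0.0, 1.0, 200), rng.normal(0.3, 1.4, 200)
        stat = cdf_discrepancy(x.mean(), x.std(ddof=1), y.mean(), y.std(ddof=1))
        print("statistic =", round(stat, 3), " reject H0:", stat > critical_value(200))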

Keywords: confidence set, cumulative distribution function, hypotheses testing, normal distribution, probability density function

Procedia PDF Downloads 176
3662 Non-Linear Velocity Fields in Turbulent Wave Boundary Layer

Authors: Shamsul Chowdhury

Abstract:

The objective of this paper is to present a detailed analysis of the turbulent wave boundary layer produced by progressive finite-amplitude waves. Most previous work on mass transport in the turbulent boundary layer has assumed that the eddy viscosity is not time-varying and that sediment movement is induced by the mean velocity. Near the ocean bottom, the waves produce a thin turbulent boundary layer where the flow is highly rotational and the shear stress associated with the fluid motion cannot be neglected. The magnitude and predominant direction of the sediment transport near the bottom are known to be closely related to the flow in the wave-induced boundary layer. The magnitude of the water particle velocity at the crest phase differs from that at the trough phase due to the non-linearity of the waves, which plays an important role in determining the sediment movement. The non-linearity of the waves becomes predominant in the surf zone, where sediment movement occurs vigorously. Therefore, in order to describe the flow near the bottom and the relationship between the flow and the movement of the sediment, the analysis was carried out using the non-linear boundary layer equation, and finite-amplitude wave theory was applied to represent the velocity fields in the turbulent wave boundary layer. At first, the calculation was performed for the turbulent wave boundary layer with a two-dimensional model that is non-linear throughout, while a Stokes second-order wave profile is adopted at the upper boundary. The calculated profile was compared with the experimental data. Finally, the calculation was carried out for various modes of the velocity and turbulent energy. The mean velocity is found to differ depending on the relative depth and the roughness. It is also found that, due to the non-linearity, the absolute values of the velocity and turbulent energy, as well as the Reynolds stress, are asymmetric. The mean velocity in the laminar boundary layer is always positive, but in the turbulent boundary layer it behaves in a far more complicated manner.
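
The crest-trough asymmetry introduced by the second-order (Stokes-type) non-linearity can be illustrated with the short sketch below; the harmonic amplitudes U1 and U2 are assumed values, not results from the paper:

    # Asymmetric free-stream velocity with a second harmonic, illustrating why the
    # velocity magnitude under the crest exceeds that under the trough.
    import numpy as np

    T = 8.0                                   # wave period (s)
    omega = 2.0 * np.pi / T
    U1, U2 = 1.0, 0.2                         # assumed first and second harmonic amplitudes (m/s)

    t = np.linspace(0.0, T, 1000)
    u = U1 * np.cos(omega * t) + U2 * np.cos(2.0 * omega * t)

    print("velocity under the crest : %.2f m/s" % u.max())    # U1 + U2
    print("velocity under the trough: %.2f m/s" % u.min())    # -(U1 - U2)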

Keywords: wave boundary, mass transport, mean velocity, shear stress

Procedia PDF Downloads 262
3661 Monitoring Future Climate Change Patterns over Major Cities in Ghana Using Coupled Model Intercomparison Project Phase 5, Support Vector Machine, and Random Forest Modeling

Authors: Stephen Dankwa, Zheng Wenfeng, Xiaolu Li

Abstract:

Climate change has recently been gaining the attention of many countries across the world. Climate change, also known as global warming and referring to the increase in average surface temperature, has been a concern of the Environmental Protection Agency of Ghana. Ghana has become vulnerable to the effects of climate change as a result of the dependence of the majority of the population on agriculture. The clearing of trees to grow crops and the burning of charcoal have been contributing factors to the present rise in temperature in the country, as a result of the release of carbon dioxide and other greenhouse gases into the air. Recently, petroleum stations across the cities have caught fire as a consequence of these climate changes, which have left Ghana poorly positioned to withstand such climate events. The significance of this research paper is therefore to project what the rise in average surface temperature will look like at the end of the mid-21st century if agriculture and deforestation are allowed to continue for some time in the country. This study uses Coupled Model Intercomparison Project Phase 5 (CMIP5) RCP 8.5 experiment model output data to monitor future climate change from 2041-2050, at the end of the mid-21st century, over the ten (10) major cities (Accra, Bolgatanga, Cape Coast, Koforidua, Kumasi, Sekondi-Takoradi, Sunyani, Ho, Tamale, Wa) in Ghana. In the models, Support Vector Machine and Random Forest, with the cities modeled as a function of heat wave metrics (minimum temperature, maximum temperature, mean temperature, heat wave duration, and number of heat waves), provided more than 50% accuracy in predicting and monitoring the pattern of the surface air temperature. The findings were that the near-surface air temperature will rise by 1°C-2°C over the coastal cities (Accra, Cape Coast, Sekondi-Takoradi). The temperature over Kumasi, Ho, and Sunyani will rise by 1°C by the end of 2050; in Koforidua, it will rise by 1°C-2°C; and in Bolgatanga, Tamale, and Wa, it will rise by 0.5°C by 2050. This indicates that the coastal and southern parts of the country are becoming hotter faster than the north, even though the northern part remains the hottest. During heat waves from 2041-2050, Bolgatanga, Tamale, and Wa will experience the highest mean daily air temperatures, between 34°C and 36°C; Kumasi, Koforidua, and Sunyani will experience about 34°C; and the coastal cities (Accra, Cape Coast, Sekondi-Takoradi) will experience below 32°C. Even though the coastal cities will experience the lowest mean temperatures, they will have the highest number of heat waves, about 62. The majority of the heat waves will last between 2 and 10 days, with a maximum of 30 days. The surface temperature will thus continue to rise towards the end of the mid-21st century (2041-2050) over the major cities in Ghana, and this needs to be brought to the attention of the Environmental Protection Agency of Ghana in order to mitigate the problem.
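
A schematic of fitting SVM and random-forest models to heat-wave metrics is sketched below; the synthetic data, the toy label, and the resulting accuracies are placeholders and are not the CMIP5-based results reported above:

    # Fitting SVM and random-forest models to heat-wave metrics (synthetic stand-in data).
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.svm import SVC
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 600
    # features: t_min, t_max, t_mean, heat-wave duration, number of heat waves
    X = np.column_stack([
        rng.normal(24, 2, n), rng.normal(33, 2, n), rng.normal(28, 2, n),
        rng.integers(2, 31, n), rng.integers(10, 63, n),
    ])
    city_class = (X[:, 4] > 40).astype(int)       # toy label: coastal-like vs inland-like

    X_tr, X_te, y_tr, y_te = train_test_split(X, city_class, test_size=0.3, random_state=0)
    for name, model in [("SVM", SVC(kernel="rbf", C=1.0)),
                        ("Random forest", RandomForestClassifier(n_estimators=200))]:
        model.fit(X_tr, y_tr)
        print(name, "accuracy:", round(model.score(X_te, y_te), 2))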

Keywords: climate changes, CMIP5, Ghana, heat waves, random forest, SVM

Procedia PDF Downloads 201
3660 Multilevel Regression Model - Evaluating the Relationship between Early Years’ Activities of Daily Living and Alzheimer’s Disease Onset, Accounting for the Influence of Key Sociodemographic Factors, Using Longitudinal Household Survey Data

Authors: Linyi Fan, C.J. Schumaker

Abstract:

Background: Biomedical efforts to treat Alzheimer’s disease (AD) have typically produced mixed to poor results, while more lifestyle-focused treatments such as exercise may fare better than existing biomedical treatments. A few promising studies have indicated that activities of daily living (ADL) may be a useful way of predicting AD. However, existing cross-sectional studies fail to show how function-related issues such as ADL in early years predict AD and how social factors influence health, either in addition to or in interaction with individual risk factors. This study would help improve screening and early treatment for the elderly population and inform healthcare practice. The findings have academic and practical significance in terms of creating positive social change. Methodology: The purpose of this quantitative, historical, correlational study was to examine the relationship between early years’ ADL and the development of AD in later years. The study included 4,526 participants derived from the RAND HRS dataset. The Health and Retirement Study (HRS) is a longitudinal household survey data set that is available for research on retirement and health among the elderly in the United States. The sample was selected based on completion of the survey questionnaire about AD and dementia. The variable indicating whether the participant had been diagnosed with AD was the dependent variable. The ADL indices and changes in ADL were the independent variables. A four-step multilevel regression model approach was utilized to address the research questions. Results: Among the 4,526 patients who completed the AD and dementia questionnaire, 144 (3.1%) were diagnosed with AD. Of the 4,526 participants, 3,465 (76.6%) had high school or higher education degrees, and 4,074 (90.0%) were above the poverty threshold. The model evaluated the effect of ADL and changes in ADL on the onset of AD in later years while allowing the intercept of the model to vary by level of education. The results suggested that the only significant predictor of the onset of AD was changes in early years’ ADL (b = 20.253, z = 2.761, p < .05). However, the results of the sensitivity analysis (b = 7.562, z = 1.900, p = .058), which included more control variables and extended the observation period of ADL, did not support this finding. The model also estimated whether the variances of the random effects vary by Level-2 variables. The results suggested that the variances associated with the random slopes were approximately zero, indicating that the relationship involving early years’ ADL was not influenced by sociodemographic factors. Conclusion: The findings indicated that an increase in changes in ADL leads to an increase in the probability of AD onset in the future. However, this finding was not supported by the model with the broader observation period. The study also failed to reject the hypothesis that the sociodemographic factors explained significant amounts of variance in the random effects. Recommendations were then made for future research and practice based on these limitations and the significance of the findings.
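
A minimal sketch of a random-intercept multilevel model with education as the Level-2 grouping is given below, using simulated stand-in data; since the study's outcome (AD onset) is binary, the actual analysis would use a multilevel logistic specification, so this only illustrates the random-intercept structure:

    # Random-intercept multilevel model sketch (simulated data, linear specification).
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 1000
    df = pd.DataFrame({
        "adl_change": rng.normal(0.0, 1.0, n),               # change in early-years ADL
        "education": rng.choice(["<HS", "HS", ">HS"], n),    # Level-2 grouping variable
    })
    df["ad_onset"] = 0.02 * df["adl_change"] + rng.normal(0.0, 0.1, n)

    model = smf.mixedlm("ad_onset ~ adl_change", data=df, groups=df["education"])
    result = model.fit()
    print(result.summary())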

Keywords: alzheimer’s disease, epidemiology, moderation, multilevel modeling

Procedia PDF Downloads 135
3659 Structural Invertibility and Optimal Sensor Node Placement for Error and Input Reconstruction in Dynamic Systems

Authors: Maik Kschischo, Dominik Kahl, Philipp Wendland, Andreas Weber

Abstract:

Understanding and modelling of real-world complex dynamic systems in biology, engineering and other fields is often made difficult by incomplete knowledge about the interactions between systems states and by unknown disturbances to the system. In fact, most real-world dynamic networks are open systems receiving unknown inputs from their environment. To understand a system and to estimate the state dynamics, these inputs need to be reconstructed from output measurements. Reconstructing the input of a dynamic system from its measured outputs is an ill-posed problem if only a limited number of states is directly measurable. A first requirement for solving this problem is the invertibility of the input-output map. In our work, we exploit the fact that invertibility of a dynamic system is a structural property, which depends only on the network topology. Therefore, it is possible to check for invertibility using a structural invertibility algorithm which counts the number of node disjoint paths linking inputs and outputs. The algorithm is efficient enough, even for large networks up to a million nodes. To understand structural features influencing the invertibility of a complex dynamic network, we analyze synthetic and real networks using the structural invertibility algorithm. We find that invertibility largely depends on the degree distribution and that dense random networks are easier to invert than sparse inhomogeneous networks. We show that real networks are often very difficult to invert unless the sensor nodes are carefully chosen. To overcome this problem, we present a sensor node placement algorithm to achieve invertibility with a minimum set of measured states. This greedy algorithm is very fast and also guaranteed to find an optimal sensor node-set if it exists. Our results provide a practical approach to experimental design for open, dynamic systems. Since invertibility is a necessary condition for unknown input observers and data assimilation filters to work, it can be used as a preprocessing step to check, whether these input reconstruction algorithms can be successful. If not, we can suggest additional measurements providing sufficient information for input reconstruction. Invertibility is also important for systems design and model building. Dynamic models are always incomplete, and synthetic systems act in an environment, where they receive inputs or even attack signals from their exterior. Being able to monitor these inputs is an important design requirement, which can be achieved by our algorithms for invertibility analysis and sensor node placement.

Keywords: data-driven dynamic systems, inversion of dynamic systems, observability, experimental design, sensor node placement

Procedia PDF Downloads 152
3658 Synthesis and Characterization of Anti-Psychotic Drugs Based DNA Aptamers

Authors: Shringika Soni, Utkarsh Jain, Nidhi Chauhan

Abstract:

Aptamers are recently discovered ~80-100 bp long artificial oligonucleotides that not only demonstrated their applications in therapeutics; it is tremendously used in diagnostic and sensing application to detect different biomarkers and drugs. Synthesizing aptamers for proteins or genomic template is comparatively feasible in laboratory, but drugs or other chemical target based aptamers require major specification and proper optimization and validation. One has to optimize all selection, amplification, and characterization steps of the end product, which is extremely time-consuming. Therefore, we performed asymmetric PCR (polymerase chain reaction) for random oligonucleotides pool synthesis, and further use them in Systematic evolution of ligands by exponential enrichment (SELEX) for anti-psychotic drugs based aptamers synthesis. Anti-psychotic drugs are major tranquilizers to control psychosis for proper cognitive functions. Though their low medical use, their misuse may lead to severe medical condition as addiction and can promote crime in social and economical impact. In this work, we have approached the in-vitro SELEX method for ssDNA synthesis for anti-psychotic drugs (in this case ‘target’) based aptamer synthesis. The study was performed in three stages, where first stage included synthesis of random oligonucleotides pool via asymmetric PCR where end product was analyzed with electrophoresis and purified for further stages. The purified oligonucleotide pool was incubated in SELEX buffer, and further partition was performed in the next stage to obtain target specific aptamers. The isolated oligonucleotides are characterized and quantified after each round of partition, and significant results were obtained. After the repetitive partition and amplification steps of target-specific oligonucleotides, final stage included sequencing of end product. We can confirm the specific sequence for anti-psychoactive drugs, which will be further used in diagnostic application in clinical and forensic set-up.

Keywords: anti-psychotic drugs, aptamer, biosensor, ssDNA, SELEX

Procedia PDF Downloads 135
3657 Discrete-Event Modeling and Simulation Methodologies: Past, Present and Future

Authors: Gabriel Wainer

Abstract:

Modeling and Simulation methods have been used to better analyze the behavior of complex physical systems, and it is now common to use simulation as a part of the scientific and technological discovery process. M&S advanced thanks to the improvements in computer technology, which, in many cases, resulted in the development of simulation software using ad-hoc techniques. Formal M&S appeared in order to try to improve the development task of very complex simulation systems. Some of these techniques proved to be successful in providing a sound base for the development of discrete-event simulation models, improving the ease of model definition and enhancing the application development tasks; reducing costs and favoring reuse. The DEVS formalism is one of these techniques, which proved to be successful in providing means for modeling while reducing development complexity and costs. DEVS model development is based on a sound theoretical framework. The independence of M&S tasks made possible to run DEVS models on different environments (personal computers, parallel computers, real-time equipment, and distributed simulators) and middleware. We will present a historical perspective of discrete-event M&S methodologies, showing different modeling techniques. We will introduce DEVS origins and general ideas, and compare it with some of these techniques. We will then show the current status of DEVS M&S, and we will discuss a technological perspective to solve current M&S problems (including real-time simulation, interoperability, and model-centered development techniques). We will show some examples of the current use of DEVS, including applications in different fields. We will finally show current open topics in the area, which include advanced methods for centralized, parallel or distributed simulation, the need for real-time modeling techniques, and our view in these fields.

Keywords: modeling and simulation, discrete-event simulation, hybrid systems modeling, parallel and distributed simulation

Procedia PDF Downloads 323
3656 Oil-Oil Correlation Using Polar and Non-Polar Fractions of Crude Oil: A Case Study in Iranian Oil Fields

Authors: Morteza Taherinezhad, Ahmad Reza Rabbani, Morteza Asemani, Rudy Swennen

Abstract:

Oil-oil correlation is one of the most important issues in geochemical studies that enables to classify oils genetically. Oil-oil correlation is generally estimated based on non-polar fractions of crude oil (e.g., saturate and aromatic compounds). Despite several advantages, the drawback of using these compounds is their susceptibility of being affected by secondary processes. The polar fraction of crude oil (e.g., asphaltenes) has similar characteristics to kerogen, and this structural similarity is preserved during migration, thermal maturation, biodegradation, and water washing. Therefore, these structural characteristics can be considered as a useful correlation parameter, and it can be concluded that asphaltenes from different reservoirs with the same genetic signatures have a similar origin. Hence in this contribution, an integrated study by using both non-polar and polar fractions of oil was performed to use the merits of both fractions. Therefore, five oil samples from oil fields in the Persian Gulf were studied. Structural characteristics of extracted asphaltenes were investigated by Fourier transform infrared (FTIR) spectroscopy. Graphs based on aliphatic and aromatic compounds (predominant compounds in asphaltenes structure) and sulphoxide and carbonyl functional groups (which are representatives of sulphur and oxygen abundance in asphaltenes) were used for comparison of asphaltenes structures in different samples. Non-polar fractions were analyzed by GC-MS. The study of asphaltenes showed the studied oil samples comprise two oil families with distinct genetic characteristics. The first oil family consists of Salman and Reshadat oil samples, and the second oil family consists of Resalat, Siri E, and Siri D oil samples. To validate our results, biomarker parameters were employed, and this approach completely confirmed previous results. Based on biomarker analyses, both oil families have a marine source rock, whereby marl and carbonate source rocks are the source rock for the first and the second oil family, respectively.

Keywords: biomarker, non-polar fraction, oil-oil correlation, petroleum geochemistry, polar fraction

Procedia PDF Downloads 137