Search results for: multi-scale computational modelling
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3662

602 Risk and Emotion: Measuring the Effect of Emotion and Other Visceral Factors on Decision Making under Risk

Authors: Michael Mihalicz, Aziz Guergachi

Abstract:

Background: The science of modelling choice preferences has evolved over centuries into an interdisciplinary field contributing to several branches of microeconomics and mathematical psychology. Early theories in decision science rested on the logic of rationality, but as the discipline and related fields matured, descriptive theories emerged that could explain systematic violations of rationality through the cognitive mechanisms underlying the thought processes that guide human behaviour. Cognitive limitations are not, however, solely responsible for systematic deviations from rationality, and many researchers are now exploring the effect of visceral factors as the more dominant drivers. The current study builds on the existing literature by exploring sleep deprivation, thermal comfort, stress, hunger, fear, anger and sadness as moderators of three distinct elements that define individual risk preference under Cumulative Prospect Theory. Methodology: This study is designed to compare the risk preferences of participants experiencing an elevated affective or visceral state to those in a neutral state, using nonparametric elicitation methods across three domains. Two experiments will be conducted simultaneously using different methodologies. The first will sample visceral states and risk preferences at random over a two-week period by prompting participants to complete an online survey remotely. In each round of questions, participants will be asked to self-assess their current state using Visual Analogue Scales before answering a series of lottery-style elicitation questions. The second experiment will be conducted in a laboratory setting using psychological primes to induce a desired state. In this experiment, emotional states will be recorded using emotion analytics and used as a basis for comparison between the two methods.
Significance: The expected results include a series of measurable and systematic effects on the subjective interpretations of gamble attributes and evidence supporting the proposition that a portion of the variability in human choice preferences unaccounted for by cognitive limitations can be explained by interacting visceral states. Significant results will promote awareness about the subconscious effect that emotions and other drive states have on the way people process and interpret information, and can guide more effective decision making by informing decision-makers of the sources and consequences of irrational behaviour.
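As background for readers, risk preference under Cumulative Prospect Theory is commonly parameterized by the Tversky-Kahneman (1992) value and probability-weighting functions. A minimal sketch, using the original 1992 median parameter estimates purely for illustration (not the elicitation procedure of this study):

```python
# Sketch of the Tversky-Kahneman (1992) value and probability-weighting
# functions that define risk preference under Cumulative Prospect Theory.
# Parameter values are the 1992 median estimates, used here for illustration.

def value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Reference-dependent value: concave for gains, convex and steeper for losses."""
    return x ** alpha if x >= 0 else -lam * (-x) ** beta

def weight(p, gamma=0.61):
    """Inverse-S probability weighting: overweights small probabilities."""
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

# Subjective valuation of a simple gamble: win 100 with p = 0.1, else 0.
cpt_value = weight(0.1) * value(100.0)
print(round(cpt_value, 2))
```

Visceral-state moderation, as proposed in the abstract, would then amount to measuring shifts in parameters such as alpha, lam, and gamma across affective conditions.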

Keywords: decision making, emotions, prospect theory, visceral factors

Procedia PDF Downloads 145
601 Computer Aided Shoulder Prosthesis Design and Manufacturing

Authors: Didem Venus Yildiz, Murat Hocaoglu, Murat Dursun, Taner Akkan

Abstract:

The shoulder joint is a more complex structure than the hip or knee joints. Beyond this overall complexity, two factors contribute to the insufficient outcomes of shoulder replacement: shoulder prosthesis design is far from fully developed, and these prostheses are difficult to place because of shoulder anatomy. The glenohumeral joint is the most complex joint of the human shoulder. Various treatments exist for shoulder failure, such as total shoulder arthroplasty and reverse total shoulder arthroplasty. Because its design is reversed relative to normal shoulder anatomy, reverse total shoulder arthroplasty has different physiological and biomechanical properties. The post-operative success of this arthroplasty depends on an improved design of the reverse total shoulder prosthesis, and design quality can be increased through biomechanical and computational analyses. In this study, data for both shoulders of a patient with a right-side fracture were collected by a 3D Computed Tomography (CT) machine in DICOM format. These data were transferred to 3D medical image processing software (Mimics, Materialise, Leuven, Belgium) to reconstruct the bone geometry of the patient's left and right shoulders. The resulting 3D geometry model of the fractured shoulder was used to construct a reverse total shoulder prosthesis in 3-matic software. Finite element (FE) analysis was conducted to compare the intact shoulder and the prosthetic shoulder in terms of stress distribution and displacements. A physiological reaction force of 800 N, representing body weight, was applied. The resultant values of the FE analysis were compared for both shoulders. This analysis of the performance of the reverse shoulder prosthesis could enhance knowledge of prosthetic design.

Keywords: reverse shoulder prosthesis, biomechanics, finite element analysis, 3D printing

Procedia PDF Downloads 150
600 Insight into the Binding Theme of CA-074Me to Cathepsin B: Molecular Dynamics Simulations and Scaffold Hopping to Identify Potential Analogues as Anti-Neurodegenerative Diseases

Authors: Tivani Phosa Mashamba-Thompson, Mahmoud E. S. Soliman

Abstract:

To date, the cause of neurodegeneration is not well understood, and diseases that stem from neurodegeneration currently have no known cures. The cathepsin B (CB) enzyme is known to be involved in the production of peptide neurotransmitters and toxic peptides in neurodegenerative diseases (NDs). CA-074Me is a membrane-permeable, irreversible, selective CB inhibitor, as confirmed by in vivo studies. Due to the lack of a crystal structure, the binding mode of CA-074Me with human CB has not previously been reported at the molecular level. The main aim of this study is to gain insight into the binding mode of CA-074Me to human CB using various computational tools. Herein, molecular dynamics simulations, binding free energy calculations and per-residue energy decomposition analysis were employed to accomplish the aim of the study. Another objective was to identify novel CB inhibitors based on the structure of CA-074Me using a fragment-based, scaffold-hopping drug design approach. Results showed that two of the designed ligands (hit 1 and hit 2) had better binding affinities than the prototype inhibitor, CA-074Me, by ~2-3 kcal/mol. Per-residue energy decomposition showed that amino acid residues Cys29, Gly196, His197 and Val174 contributed the most towards the binding, and van der Waals forces were found to be the major component of the binding interactions. The findings of this study should assist medicinal chemists in designing potential irreversible CB inhibitors.

Keywords: cathepsin B, scaffold hopping, docking, molecular dynamics, binding free energy, neurodegenerative diseases

Procedia PDF Downloads 373
599 Modelling the Behavior of Commercial and Test Textiles against Laundering Process by Statistical Assessment of Their Performance

Authors: M. H. Arslan, U. K. Sahin, H. Acikgoz-Tufan, I. Gocek, I. Erdem

Abstract:

Various exterior factors have perpetual effects on textile materials during wear, use and laundering in everyday life. In accordance with their frequency of use, textile materials must be laundered at certain intervals, and the medium in which laundering takes place has inevitable detrimental physical and chemical effects on them, caused by the parameters inherent in the process. The innate structures of different textile materials result in many different physical, chemical and mechanical characteristics, so these materials behave differently against several exterior factors. By modelling the behaviour of commercial and test textiles group-wise against the laundering process, it is possible to disclose the relation between these two groups of materials, leading to a better understanding of the similarities and differences in their behaviour against the washing parameters of laundering. Thus, the goal of the current research is to examine the behaviour of two groups of textile materials, commercial textiles and test textiles, towards the main washing machine parameters during the laundering process, namely temperature, load quantity, mechanical action and water level, by concentrating on shrinkage, pilling, sewing defects, collar abrasion, defects other than sewing defects, whitening and the overall properties of the textiles. In this study, cotton fabrics were preferred as the commercial textiles because garments made of cotton are the products most demanded by textile consumers in daily life. A full factorial experimental set-up was used to design the experimental procedure. All profiles, each including all of the commercial and test textiles, were laundered for 20 cycles in a commercial home laundering machine to investigate the effects of the chosen parameters. For the laundering process, a modified version of the IEC 60456 test method was utilized. The amount of detergent was adjusted at 0.5 g per liter according to the load quantity level. Datacolor 650®, the EMPA Photographic Standards for Pilling Test and visual examination were utilized to test and characterize the textiles. Furthermore, the relation between the commercial and test textiles in terms of performance was investigated in depth through statistical analysis performed with the MINITAB® package program, modelling their behaviour against the parameters of the laundering process. In the experimental work, the behaviour of both groups of textiles towards the washing machine parameters was assessed visually and quantitatively in the dry state.
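The full factorial experimental set-up described above can be sketched as follows; the factor levels shown are illustrative assumptions, not the study's actual settings:

```python
from itertools import product

# Hypothetical sketch of a full factorial design over the four
# washing-machine parameters named in the abstract. The levels below
# are illustrative placeholders, not the study's actual settings.
temperatures = [30, 40, 60]                  # wash temperature, deg C
loads = ["half", "full"]                     # load quantity
mechanical_action = ["normal", "intensive"]  # drum action
water_level = ["low", "high"]                # amount of water

# Every combination of every level is one experimental run
design = list(product(temperatures, loads, mechanical_action, water_level))
print(len(design))  # 3 * 2 * 2 * 2 = 24 runs
```

Each run would then be repeated for 20 laundering cycles and scored on the response variables (shrinkage, pilling, abrasion, etc.) before statistical modelling.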

Keywords: behavior against washing machine parameters, performance evaluation of textiles, statistical analysis, commercial and test textiles

Procedia PDF Downloads 353
598 Biophysical Consideration in the Interaction of Biological Cell Membranes with Virus Nanofilaments

Authors: Samaneh Farokhirad, Fatemeh Ahmadpoor

Abstract:

Biological membranes are constantly in contact with various filamentous soft nanostructures that either reside on their surface or are transported between the cell and its environment. In particular, viral infections are determined by the interaction of viruses (such as filoviruses) with cell membranes, membrane protein organization (such as cytoskeletal proteins and actin filament bundles) has been proposed to influence the mechanical properties of lipid membranes, and the adhesion of filamentous nanoparticles influences their delivery yield into target cells or tissues. The goal of this research is to integrate the rapidly increasing but still fragmented experimental observations on the adhesion and self-assembly of nanofilaments (including filoviruses, actin filaments, and natural and synthetic nanofilaments) on cell membranes into a general, rigorous, and unified knowledge framework. The global outbreak of coronavirus disease in 2020, which has persisted for over three years, highlights the crucial role that nanofilament-based delivery systems play in human health. This work will unravel the role of a unique property of all cell membranes, namely flexoelectricity, and the significance of nanofilament flexibility in the adhesion and self-assembly of nanofilaments on cell membranes. This will be achieved using a combination of continuum mechanics, statistical mechanics, and molecular dynamics and Monte Carlo simulations. The findings will help address the societal need to understand the biophysical principles that govern the attachment of filoviruses and flexible nanofilaments onto living cells, and will provide guidance for the development of nanofilament-based vaccines for a range of diseases, including infectious diseases and cancer.

Keywords: virus nanofilaments, cell mechanics, computational biophysics, statistical mechanics

Procedia PDF Downloads 86
597 Molecular Electron Density Theory Study on the Mechanism and Selectivity of the 1,3-Dipolar Cycloaddition Reaction of N-Methyl-C-(2-Furyl) Nitrone with Activated Alkenes

Authors: Moulay Driss Mellaoui, Abdallah Imjjad, Rachid Boutiddar, Haydar Mohammad-Salim, Nivedita Acharjee, Hassan Bourzi, Souad El Issami, Khalid Abbiche, Hanane Zejli

Abstract:

We have investigated the underlying molecular processes involved in the [3+2] cycloaddition (32CA) reactions between N-methyl-C-(2-furyl) nitrone and three acetylene derivatives: 4b, 5b, and 6b. For this investigation, we utilized molecular electron density theory (MEDT) and density functional theory (DFT) methods at the B3LYP-D3/6-31G(d) computational level. These 32CA reactions, which exhibit a zwitterionic (zw-type) nature, proceed through a one-step mechanism with activation enthalpies ranging from 8.80 to 14.37 kcal mol⁻¹ in acetonitrile and ethanol solvents. When the nitrone reacts with phenyl methyl propiolate (4b), two regioisomeric pathways lead to the formation of two products: P1,5-4b and P1,4-4b. On the other hand, when the nitrone reacts with dimethyl acetylenedicarboxylate (5b) or acetylenedicarboxylic acid (but-2-ynedioic acid) (6b), a single product is formed. Through topological analysis, the nitrone can be categorized as a zwitterionic three-atom component (TAC). Furthermore, analysis of conceptual density functional theory (CDFT) indices classifies the 32CA reactions of the nitrone with 4b, 5b, and 6b as forward electron density flux (FEDF) reactions. The study of bond evolution theory (BET) reveals that the formation of the new C-C and C-O covalent bonds does not begin in the transition states, as the intermediate stages of these reactions display pseudoradical centers on the atoms already involved in bonding.

Keywords: 4-isoxazoline, DFT/B3LYP-D3, regioselectivity, cycloaddition reaction, MEDT, ELF

Procedia PDF Downloads 172
596 Simplified Stress Gradient Method for Stress-Intensity Factor Determination

Authors: Jeries J. Abou-Hanna

Abstract:

Several techniques exist for determining stress-intensity factors in linear elastic fracture mechanics analysis. These techniques are based on analytical, numerical, and empirical approaches that are well documented in the literature and in engineering handbooks. However, not all techniques share the same merit. In addition to overly conservative results, numerical methods that require extensive computational effort, and those requiring copious user parameters, hinder practicing engineers from efficiently evaluating stress-intensity factors. This paper investigates the prospects of reducing the complexity and the number of required variables in determining stress-intensity factors through the use of the stress gradient and a weighting function. The heart of this work resides in the understanding that fracture emanating from stress concentration locations cannot be explained by a single maximum stress value, but requires the use of a critical volume in which the crack exists. To assess the effectiveness of this technique, this study investigated components of different notch geometries and varying levels of stress gradient. Two forms of weighting function were employed to determine stress-intensity factors, and the results were compared to exact analytical methods. The results indicated that the “exponential” weighting function was superior to the “absolute” weighting function. An error band of ±10% was met for cases ranging from the steep stress gradient of a sharp V-notch to the milder stress transitions of a large circular notch. The proposed method has been shown to be a worthwhile consideration.
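The paper does not reproduce its weighting functions here, but the idea of averaging the stress over a critical volume with an exponential weight can be sketched as follows. The weight form, critical-distance scale, stress profile, and crack length are all hypothetical illustrations, not the authors' formulation:

```python
import math

# Hypothetical sketch of the stress-gradient weighting idea: an exponential
# weight w(x) = exp(-x / x0) applied to the stress profile sigma(x) ahead of
# the notch root, with x0 an assumed critical-distance scale.

def effective_stress(sigma, x, x0=0.1):
    """Weighted-average stress over the critical volume (trapezoidal rule)."""
    w = [math.exp(-xi / x0) for xi in x]
    num = sum(0.5 * (w[i] * sigma[i] + w[i + 1] * sigma[i + 1]) * (x[i + 1] - x[i])
              for i in range(len(x) - 1))
    den = sum(0.5 * (w[i] + w[i + 1]) * (x[i + 1] - x[i])
              for i in range(len(x) - 1))
    return num / den

# Steeply decaying stress ahead of a sharp notch (illustrative values, MPa; x in mm-like units)
x = [0.0, 0.05, 0.1, 0.2, 0.4]
sigma = [300.0, 220.0, 170.0, 120.0, 90.0]
s_eff = effective_stress(sigma, x)
a = 0.002  # assumed crack length, m
K = s_eff * math.sqrt(math.pi * a)  # stress-intensity estimate, MPa*sqrt(m)
print(round(s_eff, 1), round(K, 2))
```

The point of the sketch is that the weighted average sits between the peak and far-field stress, so the resulting K is less conservative than a peak-stress estimate.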

Keywords: fracture mechanics, finite element method, stress intensity factor, stress gradient

Procedia PDF Downloads 132
595 [Keynote Talk]: Applying p-Balanced Energy Technique to Solve Liouville-Type Problems in Calculus

Authors: Lina Wu, Ye Li, Jia Liu

Abstract:

We are interested in solving Liouville-type problems to explore constancy properties for maps or differential forms on Riemannian manifolds. Geometric structures on manifolds, the existence of constancy properties for maps or differential forms, and the energy growth of maps or differential forms are intertwined. In this article, we concentrate on discovering solutions to Liouville-type problems where the manifolds are Euclidean spaces (i.e., flat Riemannian manifolds) and the maps become real-valued functions. Liouville-type results on vanishing properties for functions are obtained. The original contribution of our research is to extend the q-energy of a function from finite in Lq space to infinite in non-Lq space by applying the p-balanced technique where q = p = 2. Calculation tools such as Hölder's inequality and tests for series have been used to evaluate limits and integrations of the function energy. The calculation ideas and computational techniques for solving Liouville-type problems shown in this article, which are applied in Euclidean spaces, can be generalized into a successful algorithm that works for both maps and differential forms on Riemannian manifolds. This algorithm has a far-reaching impact on the research of solving Liouville-type problems in general settings involving infinite energy. The p-balanced technique in this algorithm provides a clue to success on the road of q-energy extension from finite to infinite.
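For orientation, the q-energy and Hölder's inequality referred to above take the following standard forms (a sketch only; the paper's precise hypotheses are not reproduced here):

```latex
% q-energy of a function u on Euclidean space, and the Hölder inequality
% used to control it, with conjugate exponents 1/p + 1/q = 1, p, q > 1:
E_q(u) = \int_{\mathbb{R}^n} |\nabla u|^q \, dx,
\qquad
\int_{\mathbb{R}^n} |f g| \, dx \le
\left( \int_{\mathbb{R}^n} |f|^p \, dx \right)^{1/p}
\left( \int_{\mathbb{R}^n} |g|^q \, dx \right)^{1/q}.
```

A Liouville-type result then asserts that a function with suitably controlled (here, p-balanced) energy growth must be constant.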

Keywords: differential forms, Hölder inequality, Liouville-type problems, p-balanced growth, p-harmonic maps, q-energy growth, tests for series

Procedia PDF Downloads 227
594 Multidimensional Modeling of Solidification Process of Multi-Crystalline Silicon under Magnetic Field for Solar Cell Technology

Authors: Mouhamadou Diop, Mohamed I. Hassan

Abstract:

Molten metal flow in metallurgical plants is highly turbulent and exhibits complex coupling with heat transfer, phase change, chemical reaction, momentum transport, etc. Molten silicon flow has a significant effect on the directional solidification of multicrystalline silicon, as it affects the temperature field and the emerging crystallization interface, as well as the transport of species and impurities during the casting process. Owing to the complexity and limits of reliable measuring techniques, computational models of fluid flow are useful tools for studying and quantifying these problems. The overall objective of this study is to investigate the potential of a traveling magnetic field for efficient operating control of the molten metal flow. A multidimensional numerical model will be developed for calculating the Lorentz force, the molten metal flow, and related phenomena, and will be applied to a laboratory-scale silicon crystallization furnace. The model will be used to study the effects of the magnetic force on the molten flow and their interdependencies. In this paper, coupled and decoupled, steady and unsteady models of the molten flow and the crystallization interface are compared. This study will allow us to retrieve the optimal traveling magnetic field parameter range for crystallization furnaces and the optimal numerical simulation strategy for industrial application.

Keywords: multidimensional, numerical simulation, solidification, multicrystalline, traveling magnetic field

Procedia PDF Downloads 241
593 A Finite Element/Finite Volume Method for Dam-Break Flows over Deformable Beds

Authors: Alia Alghosoun, Ashraf Osman, Mohammed Seaid

Abstract:

A coupled two-layer finite volume/finite element method is proposed for solving the dam-break flow problem over deformable beds. The governing equations consist of the well-balanced two-layer shallow water equations for the water flow and a linear elastic model for the bed deformations. Deformations in the topography can be caused by a sudden localized force or simply by a class of sliding displacements of the bathymetry. This deformation of the bed is a source of perturbations on the water surface, generating water waves that propagate with different amplitudes and frequencies. Coupling conditions at the interface are also investigated in the current study, and a two-mesh procedure is proposed for the transfer of information through the interface. In the present work, a new procedure is implemented at the soil-water interface using the finite element and two-layer finite volume meshes with a conservative distribution of the forces at their intersections. The finite element method employs quadratic elements on an unstructured triangular mesh, and the finite volume method uses the Rusanov scheme to compute the numerical fluxes. The coupled numerical method is highly efficient, accurate and well balanced, and it can handle complex geometries as well as rapidly varying flows. Numerical results are presented for several test examples of dam-break flows over deformable beds. A mesh convergence study is performed for both methods; the overall model provides new insight into these problems at minimal computational cost.
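The Rusanov flux used by the finite volume part can be sketched for the single-layer 1D shallow water equations (a minimal illustration, not the authors' two-layer implementation):

```python
import math

# Minimal sketch (assumed, not the authors' code) of the Rusanov numerical
# flux for the 1D shallow water equations with state U = (h, hu) and
# physical flux F(U) = (hu, hu^2 + g h^2 / 2).
G = 9.81  # gravitational acceleration, m/s^2

def physical_flux(h, hu):
    u = hu / h
    return (hu, hu * u + 0.5 * G * h * h)

def rusanov_flux(left, right):
    """Local Lax-Friedrichs (Rusanov) flux at a cell interface."""
    hL, huL = left
    hR, huR = right
    fL, fR = physical_flux(hL, huL), physical_flux(hR, huR)
    # Maximum wave speed |u| + sqrt(g h) over the two states
    smax = max(abs(huL / hL) + math.sqrt(G * hL),
               abs(huR / hR) + math.sqrt(G * hR))
    return tuple(0.5 * (fl + fr) - 0.5 * smax * (ur - ul)
                 for fl, fr, ul, ur in zip(fL, fR, left, right))

# Dam-break interface: deep water at rest on the left, shallow on the right
flux = rusanov_flux((2.0, 0.0), (1.0, 0.0))
print(flux)
```

The dissipative term proportional to smax is what keeps the scheme stable across the dam-break discontinuity.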

Keywords: dam-break flows, deformable beds, finite element method, finite volume method, hybrid techniques, linear elasticity, shallow water equations

Procedia PDF Downloads 173
592 Eco-Friendly Polymeric Corrosion Inhibitor for Sour Oilfield Environment

Authors: Alireza Rahimi, Abdolreza Farhadian, Arash Tajik, Elaheh Sadeh, Avni Berisha, Esmaeil Akbari Nezhad

Abstract:

Although natural polymers have been shown to have some inhibitory properties against sour corrosion, they are not considered very effective green corrosion inhibitors. Accordingly, effective corrosion inhibitors should be developed from natural resources to mitigate sour corrosion in the oil and gas industry. Here, Arabic gum was employed as an eco-friendly precursor for the synthesis of innovative polyurethanes designed as highly efficient corrosion inhibitors for sour oilfield solutions. A comprehensive assessment, combining experimental and computational analyses, was conducted to evaluate the inhibitory performance of the inhibitor. Electrochemical measurements demonstrated that a concentration of 200 mM of the inhibitor offered substantial protection to mild steel against sour corrosion, yielding inhibition efficiencies of 98% and 95% at 25 °C and 60 °C, respectively. Additionally, the presence of the inhibitor led to a smoother steel surface, indicating the adsorption of polyurethane molecules onto the metal surface. X-ray photoelectron spectroscopy results further validated the chemical adsorption of the inhibitor on mild steel surfaces. Scanning Kelvin probe microscopy revealed a shift in the potential distribution of the steel surface towards negative values, indicating inhibitor adsorption and inhibition of the corrosion process. Molecular dynamics simulation indicated high adsorption energy values for the inhibitor, suggesting its spontaneous adsorption onto the Fe (110) surface. These findings underscore the potential of Arabic gum as a viable resource for the development of polyurethanes under mild conditions, serving as effective corrosion inhibitors for sour solutions.
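Inhibition efficiencies such as the 98% figure above are conventionally derived from corrosion current densities measured with and without the inhibitor; a minimal sketch with illustrative values (not the study's data):

```python
# Sketch of how inhibition efficiency is commonly computed from
# electrochemical (e.g. polarization) measurements. The corrosion current
# densities below are illustrative placeholders, not the study's data.

def inhibition_efficiency(i_blank, i_inhibited):
    """IE% = (i_corr,blank - i_corr,inhibited) / i_corr,blank * 100."""
    return (i_blank - i_inhibited) / i_blank * 100.0

# Example: uninhibited vs. inhibited corrosion current density (uA/cm^2)
print(round(inhibition_efficiency(250.0, 5.0), 1))  # → 98.0
```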

Keywords: environmental effect, Arabic gum, corrosion inhibitor, sour corrosion, molecular dynamics simulation

Procedia PDF Downloads 50
591 Recognising the Importance of Smoking Cessation Support in Substance Misuse Patients

Authors: Shaine Mehta, Neelam Parmar, Patrick White, Mark Ashworth

Abstract:

Patients with a history of substance misuse have a high prevalence of comorbidities, including asthma and chronic obstructive pulmonary disease (COPD). Their mortality rates are higher than those of the general population, and a link to respiratory disease has been reported. Randomised controlled trials (RCTs) support opioid substitution therapy as an effective means of harm reduction. However, whilst a high proportion of patients receiving opioid substitution therapy are smokers, to the authors' best knowledge there have been no studies of respiratory disease and smoking intensity in these patients. A cross-sectional prevalence study was conducted using an anonymised patient-level database in primary care, Lambeth DataNet (LDN). We included patients aged 18 years and over who had records of ever having been prescribed methadone in primary care. Patients under 18 years old or prescribed buprenorphine (because of uncertainty about the prescribing indication) were excluded. Demographic, smoking, alcohol, asthma and COPD coding data were extracted. Differences between methadone and non-methadone users were explored with multivariable analysis. LDN contained data on 321,395 patients aged ≥ 18 years; 676 (0.16%) had a record of methadone prescription. Patients prescribed methadone were more likely to be male (70.7% vs. 50.4%), older (48.9 vs. 41.5 years) and less likely to be from an ethnic minority group (South Asian 2.1% vs. 7.8%; Black African 8.9% vs. 21.4%). Almost all those prescribed methadone were smokers or ex-smokers (97.3% vs. 40.9%); more were non-drinkers (41.3% vs. 24.3%). We found a high prevalence of COPD (12.4% vs. 1.4%) and asthma (14.2% vs. 4.4%). Smoking intensity data showed a high prevalence of smoking ≥ 20 cigarettes per day (21.5% vs. 13.1%). The risk of COPD, adjusted for age, gender, ethnicity and deprivation, was raised in smokers (odds ratio 14.81, 95% CI 11.26-19.47) and in the methadone group (OR 7.51, 95% CI 5.78-9.77). Furthermore, after adjustment for smoking intensity (number of cigarettes/day), the risk remained raised in the methadone group (OR 4.77, 95% CI 3.13-7.28). This high burden of respiratory disease, compounded by high rates of smoking, is a public health concern. It supports an integrated approach to health in patients treated for opiate dependence, with access to smoking cessation support. Further work may evaluate the current structure and commissioning of substance misuse services, including smoking cessation. Regression modelling highlights that methadone as a 'risk factor' was independently associated with COPD prevalence, even after adjustment for smoking intensity. This merits further exploration, as the association may be related to unexplored aspects of smoking (such as the number of years smoked) or to other related exposures, such as smoking heroin or crack cocaine.
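For orientation, an unadjusted odds ratio and its 95% confidence interval can be computed from a 2x2 table as sketched below. The counts are invented placeholders, and the study's reported ORs come from multivariable regression, not this calculation:

```python
import math

# Sketch of an unadjusted odds ratio with a Wald 95% CI from a 2x2
# exposure-outcome table. Counts are illustrative, not the study's data.

def odds_ratio_ci(a, b, c, d):
    """a: exposed cases, b: exposed non-cases, c: unexposed cases, d: unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, lo, hi

or_, lo, hi = odds_ratio_ci(84, 592, 4500, 316219)
print(round(or_, 2), round(lo, 2), round(hi, 2))
```

Adjustment for covariates such as age, gender and smoking intensity, as in the study, would instead require a logistic regression model.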

Keywords: methadone, respiratory disease, smoking cessation, substance misuse

Procedia PDF Downloads 138
590 Efficient Implementation of Finite Volume Multi-Resolution WENO Scheme on Adaptive Cartesian Grids

Authors: Yuchen Yang, Zhenming Wang, Jun Zhu, Ning Zhao

Abstract:

An easy-to-implement and robust finite volume multi-resolution weighted essentially non-oscillatory (WENO) scheme is proposed on adaptive Cartesian grids in this paper. This multi-resolution WENO scheme is combined with the ghost-cell immersed boundary method (IBM) and a wall-function technique to solve the Navier-Stokes equations. Unlike k-exact finite volume WENO schemes, which involve large amounts of extra storage, repeated solution of the matrix generated by a least-squares method, or the calculation of optimal linear weights on adaptive Cartesian grids, the present methodology adds very little overhead and can be easily implemented in existing edge-based computational fluid dynamics (CFD) codes with minor modifications. Moreover, the linear weights of this adaptive finite volume multi-resolution WENO scheme can be any positive numbers on the condition that their sum is one. This bypasses the calculation of the optimal linear weights and avoids dealing with negative linear weights on adaptive Cartesian grids. Some benchmark viscous problems are numerically solved to show the efficiency and good performance of this adaptive multi-resolution WENO scheme. Compared with a second-order edge-based method, the presented method can be implemented on an adaptive Cartesian grid with slight modification for high-Reynolds-number problems.

Keywords: adaptive mesh refinement method, finite volume multi-resolution WENO scheme, immersed boundary method, wall-function technique

Procedia PDF Downloads 143
589 Study on the Effects of Geometrical Parameters of Helical Fins on Heat Transfer Enhancement of Finned Tube Heat Exchangers

Authors: H. Asadi, H. Naderan Tahan

Abstract:

The aim of this paper is to investigate the effect of the geometrical properties of helical fins in double pipe heat exchangers and to derive hydraulic and thermal design tables and equations for double pipe heat exchangers with helical fins. Numerical modelling is used to calculate the considered parameters, and design tables and correlated equations are generated by repeating the parametric numerical procedure for different fin geometries. The friction factor coefficient and Nusselt number are calculated for different values of Reynolds number, fluid Prandtl number and fin twist angle for the range of laminar fluid flow in an annular tube with helical fins. Results showed that the friction factor coefficient and Nusselt number generally increase with higher Reynolds numbers and fin twist angles, although the two parameters follow different patterns in response to Reynolds number increments. A thermal performance factor is defined to analyze these different patterns. Temperature and velocity contours are plotted against twist angle and number of fins to describe the changes in flow patterns in different geometries of the twisted finned annulus. Finally, the friction factor coefficient, Nusselt number and thermal performance factor of the twisted finned annulus are correlated by simulating the model at different design points.
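A common definition of the thermal performance factor weighs the Nusselt number gain against the friction-factor penalty at equal pumping power; a sketch assuming this standard form (the paper's exact definition and the Nu/f values below are not given in the abstract):

```python
# Sketch of the thermal performance factor commonly used to compare an
# enhanced (finned) passage against a plain reference at equal pumping
# power. The Nu and f values below are illustrative, not study results.

def thermal_performance_factor(nu, nu0, f, f0):
    """eta = (Nu/Nu0) / (f/f0)**(1/3); eta > 1 means a net benefit."""
    return (nu / nu0) / (f / f0) ** (1.0 / 3.0)

# Example: fins raise the Nusselt number by 80% at double the friction factor
eta = thermal_performance_factor(9.0, 5.0, 0.08, 0.04)
print(round(eta, 3))  # → 1.429
```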

Keywords: double pipe heat exchangers, heat exchanger performance, twisted fins, computational fluid dynamics

Procedia PDF Downloads 281
588 Microwave Synthesis and Molecular Docking Studies of Azetidinone Analogous Bearing Diphenyl Ether Nucleus as a Potent Antimycobacterial and Antiprotozoal Agent

Authors: Vatsal M. Patel, Navin B. Patel

Abstract:

The present study deals with developing a series bearing a diphenyl ether nucleus using a structure-based drug design concept. A new series of diphenyl ether based azetidinones, namely N-(3-chloro-2-oxo-4-(3-phenoxyphenyl)azetidin-1-yl)-2-(substituted amino)acetamides (2a-j), was synthesized by condensation of m-phenoxybenzaldehyde with 2-(substituted-phenylamino)acetohydrazide, followed by cyclisation of the resulting Schiff base (1a-j), by a conventional method as well as by a microwave heating approach as part of an environmentally benign synthetic protocol. All the synthesized compounds were characterized by spectral analysis and screened for in vitro antimicrobial, antitubercular and antiprotozoal activity. Compound 2f was found to be the most active against M. tuberculosis (MIC 6.25 µM) in the primary screening, and the same derivative showed potency against L. mexicana and T. cruzi with MIC values of 2.09 and 6.69 µM, comparable to the reference drugs miltefosine and nifurtimox. To provide clear evidence for predicting the binding mode and approximate binding energy of a compound to a target in terms of ligand-protein interaction, all synthesized compounds were docked against an enoyl-[acyl-carrier-protein] reductase of M. tuberculosis (PDB ID: 4u0j). The computational studies revealed that the azetidinone derivatives have a high affinity for the active site of the enzyme, which provides a strong platform for new structure-based design efforts. Lipinski's parameters showed good drug-like properties, indicating that these compounds can be developed as oral drug candidates.
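The Lipinski screening referred to above can be sketched as a rule-of-five check; the property values used here are hypothetical placeholders, not the study's computed descriptors:

```python
# Sketch of a Lipinski rule-of-five check for oral drug-likeness.
# The descriptor values passed in below are hypothetical placeholders.

def lipinski_violations(mw, logp, h_donors, h_acceptors):
    """Count violations: MW > 500, logP > 5, H-bond donors > 5, acceptors > 10."""
    rules = [mw > 500, logp > 5, h_donors > 5, h_acceptors > 10]
    return sum(rules)

# A compound is conventionally acceptable if it violates at most one rule
violations = lipinski_violations(mw=452.9, logp=3.8, h_donors=2, h_acceptors=6)
print(violations)  # → 0
```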

Keywords: antimycobacterial, antiprotozoal, azetidinone, diphenylether, docking, microwave

Procedia PDF Downloads 153
587 Scientific Development as Diffusion on a Social Network: An Empirical Case Study

Authors: Anna Keuchenius

Abstract:

Broadly speaking, scientific development is studied either in a qualitative manner, with a focus on the behavior and interpretations of academics (as in the sociology of science and science studies), or in a quantitative manner, with a focus on the analysis of publications (as in scientometrics and bibliometrics). The two approaches come with different sets of methodologies and few cross-references. This paper contributes to bridging this divide by approaching the process of scientific progress from a qualitative sociological angle on the one hand, while employing quantitative and computational techniques on the other. As a case study, we analyze the diffusion of Granovetter's hypothesis from his 1973 paper 'The Strength of Weak Ties.' A network is constructed of all scientists who have referenced this particular paper, with directed edges to all other researchers who are referenced concurrently with Granovetter's 1973 paper. Studying the structure and growth of this network over time, we find that Granovetter's hypothesis is used by distinct communities of scientists, each with its own key narrative into which the hypothesis is fitted. The diffusion within the communities shares similarities with the diffusion of an innovation, in which innovators, early adopters, and an early and late majority can clearly be distinguished. Furthermore, the network structure shows that each community is clustered around one or a few hub scientists who are disproportionately often referenced and seem largely responsible for carrying the hypothesis into their scientific subfield. The larger implication of this case study is that the diffusion of scientific hypotheses and ideas is not the spreading of well-defined objects over a network. Rather, diffusion is a process in which the object itself changes dynamically in concurrence with its spread.
Therefore, it is argued that the methodology presented in this paper has potential beyond the scientific domain, in the study of the diffusion of other ill-defined objects, such as opinions, behavior, and ideas.
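The network construction and hub detection described above can be sketched in a few lines: each citing scientist contributes directed edges to the authors co-referenced alongside the focal paper, and hubs are the nodes with disproportionate in-degree. The author names, edges, and the degree threshold below are invented for illustration.

```python
# Toy sketch of the co-reference network: citing scientist -> authors referenced
# in the same bibliography as the focal paper.  All names/edges are invented.
from collections import Counter

coreferences = {
    "citer_A": ["Hub1", "X"],
    "citer_B": ["Hub1", "Y"],
    "citer_C": ["Hub1", "Z"],
    "citer_D": ["X"],
}

# in-degree = how often each author is co-referenced
in_degree = Counter(t for targets in coreferences.values() for t in targets)

# "hub" scientists: disproportionately referenced (threshold chosen arbitrarily)
hubs = [name for name, deg in in_degree.most_common() if deg >= 3]
print(hubs)
```

A real analysis would additionally track the network's growth over time to place adopters on the innovation-diffusion curve.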

Keywords: diffusion of innovations, network analysis, scientific development, sociology of science

Procedia PDF Downloads 299
586 A Stochastic Diffusion Process Based on the Two-Parameters Weibull Density Function

Authors: Meriem Bahij, Ahmed Nafidi, Boujemâa Achchab, Sílvio M. A. Gama, José A. O. Matos

Abstract:

Stochastic modeling concerns the use of probability to model real-world situations in which uncertainty is present. The purpose of stochastic modeling is therefore to estimate the probability of outcomes within a forecast, i.e., to predict what might happen under different conditions or decisions. In the present study, we present a model of a stochastic diffusion process based on the bi-Weibull (two-parameter Weibull) distribution function: its trend is proportional to the bi-Weibull probability density function. In general, the Weibull distribution has the ability to assume the characteristics of many different types of distributions. This has made it very popular among engineers and quality practitioners, who consider it the most commonly used distribution for problems such as modeling reliability data, accelerated life testing, and maintainability modeling and analysis. In this work, we start by obtaining the probabilistic characteristics of this model, namely the explicit expression of the process, its trends, and its distribution, by transforming the diffusion process into a Wiener process as shown in the Ricciardi theorem. Then, we develop the statistical inference of this model using the maximum likelihood methodology. Finally, using simulated data, we analyse the computational problems associated with the parameters, an issue of great importance for application to real data, with the use of convergence analysis methods. Overall, the use of a stochastic model reflects only a pragmatic decision on the part of the modeler. Given the data that are available and the universe of models known to the modeler, this model represents the best currently available description of the phenomenon under consideration.
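One plausible reading of such a model can be sketched with an Euler-Maruyama simulation: a diffusion whose drift term is proportional to the two-parameter Weibull density. The SDE form, the multiplicative-noise choice, and all parameter values below are assumptions made for the sketch, not the authors' exact specification.

```python
# Illustrative Euler-Maruyama simulation of a diffusion whose drift is
# proportional to a two-parameter Weibull density.  SDE form and parameters
# are assumptions for this sketch only.
import math
import random

def weibull_pdf(t, shape, scale):
    """Two-parameter Weibull probability density."""
    return (shape / scale) * (t / scale) ** (shape - 1) * math.exp(-(t / scale) ** shape)

def simulate(x0=1.0, shape=2.0, scale=1.0, sigma=0.1, dt=1e-3, steps=2000, seed=42):
    random.seed(seed)
    x, t, path = x0, 0.0, [x0]
    for _ in range(steps):
        drift = weibull_pdf(t, shape, scale) * x              # trend term
        x += drift * dt + sigma * x * random.gauss(0.0, math.sqrt(dt))
        t += dt
        path.append(x)
    return path

path = simulate()
print(len(path))
```

Parameter estimation would then proceed by maximum likelihood over discretely sampled paths, as the abstract describes.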

Keywords: diffusion process, discrete sampling, likelihood estimation method, simulation, stochastic diffusion process, trend functions, two-parameter Weibull density function

Procedia PDF Downloads 303
585 Pre-Transformation Phase Reconstruction for Deformation-Induced Transformation in AISI 304 Austenitic Stainless Steel

Authors: Manendra Singh Parihar, Sandip Ghosh Chowdhury

Abstract:

Austenitic stainless steels are widely used and offer a good combination of properties. When this steel is plastically deformed, a phase transformation of the metastable face-centred cubic (FCC) austenite to the stable body-centred cubic (α’) or to the hexagonal close-packed (ε) martensite may occur, leading to an enhancement in mechanical properties such as strength. The work was based on variant selection and the corresponding texture analysis for the strain-induced martensitic transformation during deformation of the parent austenite FCC phase into the product HCP and BCC martensite phases separately, each obeying its respective orientation relationship. The automated reconstruction of the parent phase orientation from the EBSD data of the product phase orientation was performed using MATLAB and TSL-OIM software. The method of triplets was used, which involves forming a triplet of neighboring product grains sharing a common variant and linking them using a misorientation-based criterion. This led to a proper reconstruction of the pre-transformation phase orientation data and thus of its microstructure and texture. The computational speed of the current method is better than that of previously used reconstruction methods. The reconstruction of austenite from ε and α’ martensite was carried out for multiple samples, and their IPF images, pole figures, inverse pole figures and ODFs were compared. Similar results were observed for all samples. The comparison provides a basis for estimating the correct transformation sequence, i.e. γ → ε → α’ or γ → α’, during deformation of AISI 304 austenitic stainless steel.
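The misorientation-based linking criterion rests on the standard misorientation angle between two grain orientations. A minimal sketch of that computation is shown below for orientations given as 3×3 rotation matrices; crystal symmetry operators, which a real cubic/hexagonal analysis must also apply, are deliberately omitted to keep the sketch short.

```python
# Misorientation angle between two orientations (as rotation matrices):
# theta = arccos((trace(R1 @ R2^T) - 1) / 2).  Symmetry operators omitted.
import math

def mat_mul_transpose(a, b):
    """Return a @ b.T for 3x3 matrices stored as nested lists."""
    return [[sum(a[i][k] * b[j][k] for k in range(3)) for j in range(3)]
            for i in range(3)]

def misorientation_deg(r1, r2):
    m = mat_mul_transpose(r1, r2)
    trace = m[0][0] + m[1][1] + m[2][2]
    c = max(-1.0, min(1.0, (trace - 1.0) / 2.0))  # clamp before arccos
    return math.degrees(math.acos(c))

identity = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
rot_z_30 = [[math.cos(math.radians(30)), -math.sin(math.radians(30)), 0],
            [math.sin(math.radians(30)),  math.cos(math.radians(30)), 0],
            [0, 0, 1]]
print(round(misorientation_deg(identity, rot_z_30), 1))  # 30.0
```

In the triplet method, two product grains are linked when this angle (after symmetry reduction) falls below a chosen tolerance.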

Keywords: variant selection, reconstruction, EBSD, austenitic stainless steel, martensitic transformation

Procedia PDF Downloads 496
584 Quantification of Lawsone and Adulterants in Commercial Henna Products

Authors: Ruchi B. Semwal, Deepak K. Semwal, Thobile A. N. Nkosi, Alvaro M. Viljoen

Abstract:

The use of Lawsonia inermis L. (Lythraceae), commonly known as henna, has many medicinal benefits; in folk medicine it is used as a remedy for diarrhoea, cancer, inflammation, headache, jaundice and skin diseases. Although henna has long been used for hair dyeing and temporary tattooing, henna body art has grown in popularity over the last 15 years, changing from a traditional bridal and festival adornment into an exotic fashion accessory. The naphthoquinone lawsone is one of the main constituents of the plant and is responsible for its dyeing property. Henna leaves typically contain 1.8-1.9% lawsone, which is used as a marker compound for the quality control of henna products. Adulteration of henna with various toxic chemicals, such as p-phenylenediamine, p-methylaminophenol, p-aminobenzene and p-toluenodiamine, to produce a variety of colours is very common and has resulted in serious health problems, including allergic reactions. This study aims to assess the quality of henna products collected from different parts of the world by determining the lawsone content, as well as the concentrations of any adulterants present. Ultra-high-performance liquid chromatography-mass spectrometry (UPLC-MS) was used to determine the lawsone concentrations in 172 henna products. Separation of the chemical constituents was achieved on an Acquity UPLC BEH C18 column using gradient elution (0.1% formic acid and acetonitrile). The UPLC-MS results revealed that of the 172 henna products, 11 contained 1.0-1.8% lawsone, 110 contained 0.1-0.9% lawsone, and 51 samples did not contain detectable levels of lawsone. High-performance thin-layer chromatography was investigated as a cheaper, more rapid technique for the quality control of henna with respect to lawsone content. The samples were applied to pre-coated silica plates using an automatic TLC Sampler 4 (CAMAG), and the plates were subsequently developed with acetic acid, acetone and toluene (0.5:1.0:8.5 v/v).
A Reprostar 3 digital system was used to capture the images. The results obtained corresponded to those from the UPLC-MS analysis. Vibrational spectroscopy (MIR or NIR) of the powdered henna, followed by chemometric modelling of the data, indicates that this technique shows promise as an alternative quality control method. Principal component analysis (PCA) was used to explore the data by observing clustering and identifying outliers. Partial least squares (PLS) multivariate calibration models were constructed for the quantification of lawsone. In conclusion, only a few of the samples analysed contain lawsone in high concentrations, indicating that most are of poor quality. The presence of adulterants that may have been added to enhance the dyeing properties of the products is currently being investigated.

Keywords: Lawsonia inermis, paraphenylenediamine, temporary tattooing, lawsone

Procedia PDF Downloads 454
583 Erosion Modeling of Surface Water Systems for Long Term Simulations

Authors: Devika Nair, Sean Bellairs, Ken Evans

Abstract:

Flow and erosion modeling provides an avenue for simulating fine suspended sediment in surface water systems such as streams and creeks. Fine suspended sediment is highly mobile, and many contaminants released by catchment disturbance attach themselves to these sediments; a knowledge of fine suspended sediment transport is therefore important in assessing contaminant transport. The CAESAR-Lisflood landform evolution model, which includes a hydrologic model (TOPMODEL) and a hydraulic model (Lisflood), is being used to assess sediment movement in tropical streams following a disturbance in the catchment and to determine the dynamics of sediment quantity in the creek over the years by simulating future scenarios. The accuracy of future simulations depends on calibrating and validating the model against past and present events. Calibration and validation involve finding a combination of model parameters which, when applied and simulated, gives model outputs similar to those observed at the real site for the corresponding input data. Calibrating the sediment output of the CAESAR-Lisflood model at the catchment level and using it to study the equilibrium conditions of the landform is an area yet to be explored. Therefore, the aim of the study was to calibrate and then validate the CAESAR-Lisflood model so that it could be run for future simulations of how the landform evolves over time. To achieve this, the model was run for a rainfall event with a set of parameters, plus discharge and sediment data for the input point of the catchment, and the model output was compared with the discharge and sediment data observed at the output point of the catchment. The model parameters were then adjusted until the model closely approximated the observed values for the catchment.
The model was then validated by running it for a different set of events and checking that it gave results similar to the observed values. The outcomes demonstrated that while the model can be calibrated well for hydrology (discharge output) throughout the year, the sediment output calibration could be improved by the ability to change parameters to account for seasonal vegetation growth at the start and end of the wet season. This study is important for assessing hydrology and sediment movement in seasonal biomes. The understanding of sediment-associated metal dispersion processes in rivers can be used in a practical way to help river basin managers more effectively control and remediate catchments affected by present and historical metal mining.
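The calibration loop described above has a generic shape: sweep a model parameter, compare simulated output at the catchment outlet with observations, and keep the parameter set with the smallest error. The sketch below uses a stand-in one-parameter "model" and invented data; it is not CAESAR-Lisflood's actual interface.

```python
# Generic calibration-by-sweep sketch.  toy_model is a placeholder for a run of
# the hydrologic model; rainfall/observed values are invented for illustration.

def toy_model(rainfall, runoff_coeff):
    """Placeholder for one model run: discharge proportional to rainfall."""
    return [runoff_coeff * r for r in rainfall]

def rmse(sim, obs):
    return (sum((s - o) ** 2 for s, o in zip(sim, obs)) / len(obs)) ** 0.5

rainfall = [0.0, 5.0, 12.0, 7.0, 1.0]
observed = [0.0, 2.1, 4.9, 2.8, 0.4]   # illustrative outlet discharge

# sweep the parameter and keep the value with the smallest error
best = min(
    ((c, rmse(toy_model(rainfall, c), observed)) for c in (0.1, 0.2, 0.3, 0.4, 0.5, 0.6)),
    key=lambda pair: pair[1],
)
print(best[0])
```

Validation then repeats the comparison on events held out of the calibration, as the abstract describes.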

Keywords: erosion modelling, fine suspended sediments, hydrology, surface water systems

Procedia PDF Downloads 81
582 The Role of Academic Leaders at Jerash University in Crises Management 'Virus Corona as a Model'

Authors: Khaled M. Hama, Mohammed Al Magableh, Zaid Al Kuri, Ahmad Qayam

Abstract:

The study aimed to identify the role of academic leaders at Jerash University in crisis management from the faculty members' point of view, taking the emerging Corona pandemic as a model; to identify differences in this role at the significance level (α = 0.05) according to the study variables (gender, academic rank, and years of experience); and to identify proposals that contribute to developing the performance of academic leaders at Jerash University in crisis management. The study was applied to a randomly selected sample of (72) faculty members at Jerash University. The researchers designed a questionnaire as the study instrument. It included two parts: the first part covered the personal data of the study sample members, and the second part was divided into five areas and (34) items to reveal the role of academic leaders at Jerash University in crisis management, with the Corona pandemic as a model. The validity and reliability of the tool were confirmed, and the study used the descriptive analytical method. The study reached the following results: the role of academic leaders at Jerash University in crisis management from the point of view of faculty members, with the emerging Corona pandemic as a model, was rated high, and there were no statistically significant differences at the level of statistical significance (α = 0.05) between the arithmetic means of the study sample's estimates of this role attributable to the study variables (gender, academic rank, and years of experience).

Keywords: academic leaders, crisis management, corona pandemic, Jerash University

Procedia PDF Downloads 46
581 AER Model: An Integrated Artificial Society Modeling Method for Cloud Manufacturing Service Economic System

Authors: Deyu Zhou, Xiao Xue, Lizhen Cui

Abstract:

With increasing collaboration among various services and the growing complexity of user demands, more and more factors affect the stable development of the cloud manufacturing service economic system (CMSE). This poses new challenges to the evolution analysis of the CMSE. Many researchers have modeled and analyzed the evolution process of the CMSE from the perspectives of individual learning and internal factors influencing the system, but without considering other important characteristics of the system's individuals (such as heterogeneity and bounded rationality) or the impact of external environmental factors. Therefore, this paper proposes an integrated artificial society model for the cloud manufacturing service economic system, which considers both the characteristics of the system's individuals and the internal and external influencing factors of the system. The model consists of three parts: the agent model, the environment model, and the rules model (Agent-Environment-Rules, AER): (1) the agent model considers important features of the individuals, such as heterogeneity and bounded rationality, based on the adaptive behavior mechanisms of perception, action, and decision-making; (2) the environment model describes the activity space of the individuals (real or virtual environment); (3) the rules model, as the driving force of system evolution, describes the mechanism of the entire system's operation and evolution. Finally, this paper verifies the effectiveness of the AER model through computational experiments.
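The Agent-Environment-Rules decomposition can be sketched as three small pieces: heterogeneous, boundedly rational agents that perceive and decide; a shared environment; and a rules function that drives one evolution step. All behaviours and numbers below are placeholder assumptions, not the paper's actual model.

```python
# Minimal AER-style sketch: agents (heterogeneous, noisy perception),
# an environment dict, and a rules step driving evolution.  All behaviours
# are invented placeholders for illustration.
import random

class Agent:
    def __init__(self, name, quality, noise):
        self.name, self.quality, self.noise = name, quality, noise

    def perceive_and_decide(self, env, rng):
        # bounded rationality: demand is seen only through noisy perception
        perceived_demand = env["demand"] + rng.gauss(0.0, self.noise)
        return self.quality * perceived_demand  # decision: offered capacity

def rules_step(agents, env, rng):
    """Rules model: total offered capacity feeds back into next-round demand."""
    offered = sum(a.perceive_and_decide(env, rng) for a in agents)
    env["demand"] = max(0.0, env["demand"] + 0.1 * (env["demand"] - offered))
    return env

rng = random.Random(0)
env = {"demand": 10.0}                                    # environment model
agents = [Agent("s1", 0.5, 0.1), Agent("s2", 0.4, 0.3)]   # heterogeneous agents
for _ in range(5):
    env = rules_step(agents, env, rng)
print(round(env["demand"], 2))
```

A computational experiment then varies agent populations or rule parameters and observes the resulting system-level evolution.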

Keywords: cloud manufacturing service economic system (CMSE), AER model, artificial social modeling, integrated framework, computing experiment, agent-based modeling, social networks

Procedia PDF Downloads 72
580 Analysis of Exploitation Damages of the Frame Scaffolding

Authors: A. Robak, M. Pieńko, E. Błazik-Borowa, J. Bęc, I. Szer

Abstract:

The analyses and classifications presented in this article are based on research carried out in 2016 and 2017 on a group of nearly one hundred scaffoldings assembled and used on construction sites in different parts of Poland. During the scaffolding selection process, efforts were made to maintain diversification in terms of parameters such as scaffolding size, investment size, type of investment, location and nature of the conducted works. As a result, the research was carried out on scaffoldings used for church renovation in a small town or attached to the facades of classic apartment blocks, as well as on scaffoldings used during the construction of skyscrapers or facilities of the largest power plants. This variety makes it possible to formulate general conclusions about the technical condition of used frame scaffoldings. Exploitation damages of the frame scaffolding elements were divided into three groups. The first group includes damages to the main structural components, which reduce the strength of the scaffolding elements and hence of the whole structure. The qualitative analysis of these damages was made on the basis of numerical models that take into account the geometry of the damage and on the basis of nonlinear static computational analyses. The second group covers exploitation damages, such as a missing pin on a guardrail bolt, which may cause an imminent threat to people using the scaffolding. These are local damages that do not affect the bearing capacity and stability of the whole structure but are very important for safe use. The last group considers damages that reduce only aesthetic values and have no direct impact on bearing capacity or safety of use. Apart from the qualitative analyses, the article presents quantitative analyses showing how frequently a given type of damage occurs.

Keywords: scaffolding, damage, safety, numerical analysis

Procedia PDF Downloads 253
579 Structural Invertibility and Optimal Sensor Node Placement for Error and Input Reconstruction in Dynamic Systems

Authors: Maik Kschischo, Dominik Kahl, Philipp Wendland, Andreas Weber

Abstract:

Understanding and modelling real-world complex dynamic systems in biology, engineering and other fields is often made difficult by incomplete knowledge about the interactions between system states and by unknown disturbances to the system. In fact, most real-world dynamic networks are open systems receiving unknown inputs from their environment. To understand a system and to estimate the state dynamics, these inputs need to be reconstructed from output measurements. Reconstructing the input of a dynamic system from its measured outputs is an ill-posed problem if only a limited number of states is directly measurable. A first requirement for solving this problem is the invertibility of the input-output map. In our work, we exploit the fact that invertibility of a dynamic system is a structural property which depends only on the network topology. Therefore, it is possible to check for invertibility using a structural invertibility algorithm which counts the number of node-disjoint paths linking inputs and outputs. The algorithm is efficient even for large networks with up to a million nodes. To understand the structural features influencing the invertibility of a complex dynamic network, we analyze synthetic and real networks using the structural invertibility algorithm. We find that invertibility largely depends on the degree distribution and that dense random networks are easier to invert than sparse inhomogeneous networks. We show that real networks are often very difficult to invert unless the sensor nodes are carefully chosen. To overcome this problem, we present a sensor node placement algorithm that achieves invertibility with a minimum set of measured states. This greedy algorithm is very fast and guaranteed to find an optimal sensor node set if one exists. Our results provide a practical approach to experimental design for open dynamic systems.
Since invertibility is a necessary condition for unknown input observers and data assimilation filters to work, it can be used as a preprocessing step to check whether these input reconstruction algorithms can be successful. If not, we can suggest additional measurements providing sufficient information for input reconstruction. Invertibility is also important for systems design and model building. Dynamic models are always incomplete, and synthetic systems act in an environment where they receive inputs or even attack signals from their exterior. Being able to monitor these inputs is an important design requirement, which can be achieved by our algorithms for invertibility analysis and sensor node placement.
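The path-counting step at the heart of the structural check can be sketched via the classical reduction: by Menger's theorem, the number of node-disjoint directed paths from inputs to outputs equals the max flow after splitting every node into a unit-capacity in/out pair. The graph below is invented for illustration, and this is a generic construction, not the authors' exact implementation.

```python
# Count node-disjoint input->output paths via unit-capacity max flow with node
# splitting (Menger's theorem).  Example edges are illustrative only.
from collections import defaultdict, deque

def node_disjoint_paths(edges, inputs, outputs):
    cap = defaultdict(int)
    adj = defaultdict(set)

    def add(u, v, c):
        cap[(u, v)] += c
        adj[u].add(v)
        adj[v].add(u)  # residual direction

    nodes = {u for u, v in edges} | {v for u, v in edges}
    for v in nodes:                      # split v: (v,'in') -> (v,'out'), cap 1
        add((v, "in"), (v, "out"), 1)
    for u, v in edges:
        add((u, "out"), (v, "in"), 1)
    for s in inputs:                     # super-source and super-sink
        add("S", (s, "in"), 1)
    for t in outputs:
        add((t, "out"), "T", 1)

    flow = 0
    while True:                          # BFS augmenting paths (Edmonds-Karp)
        parent = {"S": None}
        queue = deque(["S"])
        while queue and "T" not in parent:
            u = queue.popleft()
            for v in adj[u]:
                if v not in parent and cap[(u, v)] > 0:
                    parent[v] = u
                    queue.append(v)
        if "T" not in parent:
            return flow
        v = "T"
        while parent[v] is not None:     # push one unit along the path
            u = parent[v]
            cap[(u, v)] -= 1
            cap[(v, u)] += 1
            v = u
        flow += 1

edges = [("u1", "a"), ("u2", "a"), ("a", "y1"), ("u2", "b"), ("b", "y2")]
print(node_disjoint_paths(edges, ["u1", "u2"], ["y1", "y2"]))  # 2
```

The greedy sensor placement then repeatedly adds the measured state that most increases this count until it matches the number of unknown inputs.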

Keywords: data-driven dynamic systems, inversion of dynamic systems, observability, experimental design, sensor node placement

Procedia PDF Downloads 146
578 Artificial Intelligence in Bioscience: The Next Frontier

Authors: Parthiban Srinivasan

Abstract:

With recent advances in computational power and access to sufficient data in the biosciences, artificial intelligence methods are increasingly being used in drug discovery research. These methods are essentially a series of advanced statistics-based exercises that review the past to indicate the likely future. Our goal is to develop a model that accurately predicts biological activity and toxicity parameters for novel compounds. We have compiled a robust library of over 150,000 chemical compounds with different pharmacological properties from the literature and public domain databases. The compounds are stored in the simplified molecular-input line-entry system (SMILES), a commonly used text encoding for organic molecules. We utilize an automated process to generate an array of numerical descriptors (features) for each molecule. Redundant and irrelevant descriptors are eliminated iteratively. Our prediction engine is based on a portfolio of machine learning algorithms; we found the Random Forest algorithm to be the better choice for this analysis. We captured nonlinear relationships in the data and formed a prediction model with reasonable accuracy by averaging across a large number of randomized decision trees. Our next step is to apply a deep neural network (DNN) algorithm to predict the biological activity and toxicity properties. We expect the DNN algorithm to give better results and improve the accuracy of the prediction. This presentation will review these prominent machine learning and deep learning methods and our implementation protocols, and discuss the usefulness of these techniques in biomedical and health informatics.
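The descriptor-generation step can be illustrated with a toy featurizer that turns a SMILES string into a few numerical counts. Real pipelines use cheminformatics toolkits (e.g. RDKit) and hundreds of descriptors; the features below are simplistic choices made for this sketch only.

```python
# Toy SMILES featurizer: a handful of count-based descriptors per molecule.
# Real descriptor sets are far richer; this is illustrative only.
import re

def simple_descriptors(smiles):
    return {
        "length": len(smiles),
        "carbons": len(re.findall(r"C(?!l)", smiles)),  # aliphatic C, not Cl
        "nitrogens": smiles.count("N"),
        "oxygens": smiles.count("O"),
        "ring_closures": len(re.findall(r"\d", smiles)),  # two digits per ring
        "branches": smiles.count("("),
    }

print(simple_descriptors("CC(=O)Oc1ccccc1C(=O)O"))  # aspirin
```

Each molecule's descriptor vector would then feed the Random Forest, with redundant features pruned iteratively as described.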

Keywords: deep learning, drug discovery, health informatics, machine learning, toxicity prediction

Procedia PDF Downloads 354
577 A Validated Estimation Method to Predict the Interior Wall of Residential Buildings Based on Easy to Collect Variables

Authors: B. Gepts, E. Meex, E. Nuyts, E. Knaepen, G. Verbeeck

Abstract:

The importance of resource efficiency and environmental impact assessment has raised the interest in knowing the amount of materials used in buildings. If no BIM model or energy performance certificate is available, material quantities can be obtained through an estimation or time-consuming calculation. For the interior wall area, no validated estimation method exists. However, in the case of environmental impact assessment or evaluating the existing building stock as future material banks, knowledge of the material quantities used in interior walls is indispensable. This paper presents a validated method for the estimation of the interior wall area for dwellings based on easy-to-collect building characteristics. A database of 4963 residential buildings spread all over Belgium is used. The data are collected through onsite measurements of the buildings during the construction phase (between mid-2010 and mid-2017). The interior wall area refers to the area of all interior walls in the building, including the inner leaf of exterior (party) walls, minus the area of windows and doors, unless mentioned otherwise. The two predictive modelling techniques used are 1) a (stepwise) linear regression and 2) a decision tree. The best estimation method is selected based on the best R² k-fold (5) fit. The research shows that the building volume is by far the most important variable to estimate the interior wall area. A stepwise regression based on building volume per building, building typology, and type of house provides the best fit, with R² k-fold (5) = 0.88. Although the best R² k-fold value is obtained when the other parameters ‘building typology’ and ‘type of house’ are included, the contribution of these variables can be seen as statistically significant but practically irrelevant. Thus, if these parameters are not available, a simplified estimation method based on only the volume of the building can also be applied (R² k-fold = 0.87). 
The robustness and precision of the method (output) are validated in three ways. First, the prediction of the interior wall area is checked by means of alternative calculations of the building volume and of the interior wall area; that is, other definitions are applied to the same data. Second, the output is tested on an extension of the database, i.e., the same definitions applied to other data. Third, the output is checked against an unrelated database with other definitions and other data. The validation of the estimation methods demonstrates that they remain accurate when the underlying data are changed. The method can support environmental as well as economic dimensions of impact assessment, as it can be used in early design. As it allows the prediction of the amount of interior wall material to be produced in the future or that might become available after demolition, the presented estimation method can be part of material flow analyses on input and on output.
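The simplified volume-only estimation amounts to an ordinary least squares fit of interior wall area on building volume. The sketch below shows that fit on five invented data points; the study's actual coefficients come from its 4963 measured dwellings, not from these numbers.

```python
# OLS of interior wall area (m^2) on building volume (m^3); all data invented.

def ols(x, y):
    """Return (slope, intercept) of a simple least-squares fit."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return slope, my - slope * mx

volume = [350.0, 420.0, 510.0, 640.0, 800.0]      # illustrative
wall_area = [210.0, 250.0, 300.0, 380.0, 470.0]   # illustrative

slope, intercept = ols(volume, wall_area)
estimate = slope * 550.0 + intercept              # predict a 550 m^3 dwelling
print(round(estimate, 1))
```

A stepwise variant would add categorical terms (building typology, type of house) and keep them only if they improve the cross-validated R².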

Keywords: buildings as material banks, building stock, estimation method, interior wall area

Procedia PDF Downloads 19
576 A World Map of Seabed Sediment Based on 50 Years of Knowledge

Authors: T. Garlan, I. Gabelotaud, S. Lucas, E. Marchès

Abstract:

Production of a global sedimentological seabed map was initiated in 1995 to provide the necessary tool for searches of aircraft and boats lost at sea, to give sedimentary information for nautical charts, and to provide input data for acoustic propagation modelling. This original approach had already been initiated a century earlier, when the French hydrographic service and the University of Nancy produced maps of the distribution of marine sediments along the French coasts and then sediment maps of the continental shelves of Europe and North America. The current map of ocean sediments was initiated from UNESCO's general map of the deep ocean floor. This map was adapted using a unified sediment classification to present all types of sediments: from beaches to the deep seabed and from glacial deposits to tropical sediments. To allow good visualization and to suit the different applications, only the granularity of sediments is represented. Published seabed maps are studied; if they are of interest, the nature of the seabed is extracted from them, the sediment classification is transcribed, and the resulting map is integrated into the world map. Data also come from interpretations of multibeam echo sounder (MES) imagery from large deep-ocean hydrographic surveys. These allow very high-quality mapping of areas that until then had been represented as homogeneous. The third and principal source of data comes from the integration of regional maps produced specifically for this project. These regional maps are compiled using all the bathymetric and sedimentary data of a region. This step makes it possible to produce a regional synthesis map, generalizing where the data are overly precise. 86 regional maps of the Atlantic Ocean, the Mediterranean Sea, and the Indian Ocean have been produced and integrated into the world sedimentary map.
This is an ongoing effort, and an updated digital version is released every two years as new maps are integrated. This article describes the choices made in terms of sediment classification, the scale of the source data and the zonation of quality variability. The map is the final step in a system comprising the Shom Sedimentary Database, enriched by more than one million point and surface data items, and four series of coastal seabed maps at 1:10,000, 1:50,000, 1:200,000 and 1:1,000,000. This step-by-step approach makes it possible to take into account the progress in seabed characterization made during the last decades. Thus, the arrival of new classification systems for the seafloor has improved the recent seabed maps, and the compilation of these new maps with those previously published allows a gradual enrichment of the world sedimentary map. However, there is still much work to be done to enhance some regions, which are still based on data acquired more than half a century ago.

Keywords: marine sedimentology, seabed map, sediment classification, world ocean

Procedia PDF Downloads 229
575 Effect of Depth on Texture Features of Ultrasound Images

Authors: M. A. Alqahtani, D. P. Coleman, N. D. Pugh, L. D. M. Nokes

Abstract:

In diagnostic ultrasound, the echographic B-scan texture is an important area of investigation, since it can be analyzed to characterize the histological state of internal tissues. An important factor requiring consideration when evaluating ultrasonic tissue texture is the depth. The attenuation of ultrasound with depth, the size of the region of interest, the gain, and the dynamic range are important variables to consider, as they can influence the analysis of texture features. These sources of variability have to be considered carefully when evaluating image texture, as different settings might influence the resultant image. The aim of this study is to investigate the effect of depth on texture features in vivo using a 3D ultrasound probe. The medial head of the left-leg gastrocnemius muscle of 10 healthy subjects was scanned. Two regions, A and B, were defined at different depths within the gastrocnemius muscle boundary. The size of both ROIs was 280 × 20 pixels, and the distance between regions A and B was kept constant at 5 mm. Texture parameters including gray level, variance, skewness, kurtosis, co-occurrence matrix, run length matrix, gradient, autoregressive (AR) model and wavelet transform features were extracted from the images. The paired t-test was used to test the depth effect for normally distributed data, and the Wilcoxon-Mann-Whitney test was used for non-normally distributed data. The gray level, variance, and run length matrix were significantly lowered when the depth increased, whereas the other texture parameters showed similar values at both depths. All texture parameters showed no significant difference between depths A and B (p > 0.05), except for gray level, variance and run length matrix (p < 0.05). This indicates that gray level, variance, and run length matrix are depth dependent.
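One of the texture features listed above, the gray-level co-occurrence matrix (GLCM), can be sketched directly. The function below counts co-occurrences for the horizontal neighbour offset (0, 1) on a tiny made-up image with 4 gray levels; real analyses also normalize the matrix and derive secondary statistics (contrast, entropy, etc.) from it.

```python
# Gray-level co-occurrence matrix for the offset (0, 1): count how often gray
# level a sits immediately left of gray level b.  Image data are invented.

def glcm(image, levels):
    matrix = [[0] * levels for _ in range(levels)]
    for row in image:
        for a, b in zip(row, row[1:]):   # horizontal neighbour pairs
            matrix[a][b] += 1
    return matrix

image = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 2, 2, 2],
    [2, 2, 3, 3],
]
for row in glcm(image, 4):
    print(row)
```

Depth-dependent attenuation shifts the gray-level histogram, which is why GLCM-derived features must be interpreted with the ROI depth in mind.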

Keywords: ultrasound image, texture parameters, computational biology, biomedical engineering

Procedia PDF Downloads 288
574 Conflict Resolution in Fuzzy Rule Base Systems Using Temporal Modalities Inference

Authors: Nasser S. Shebka

Abstract:

Fuzzy logic is used in complex adaptive systems where classical tools of representing knowledge are unproductive. Nevertheless, the incorporation of fuzzy logic, as it’s the case with all artificial intelligence tools, raised some inconsistencies and limitations in dealing with increased complexity systems and rules that apply to real-life situations and hinders the ability of the inference process of such systems, but it also faces some inconsistencies between inferences generated fuzzy rules of complex or imprecise knowledge-based systems. The use of fuzzy logic enhanced the capability of knowledge representation in such applications that requires fuzzy representation of truth values or similar multi-value constant parameters derived from multi-valued logic, which set the basis for the three t-norms and their based connectives which are actually continuous functions and any other continuous t-norm can be described as an ordinal sum of these three basic ones. However, some of the attempts to solve this dilemma were an alteration to fuzzy logic by means of non-monotonic logic, which is used to deal with the defeasible inference of expert systems reasoning, for example, to allow for inference retraction upon additional data. However, even the introduction of non-monotonic fuzzy reasoning faces a major issue of conflict resolution for which many principles were introduced, such as; the specificity principle and the weakest link principle. The aim of our work is to improve the logical representation and functional modelling of AI systems by presenting a method of resolving existing and potential rule conflicts by representing temporal modalities within defeasible inference rule-based systems. 
Our paper investigates the possibility of resolving fuzzy rule conflicts in a non-monotonic fuzzy reasoning system by introducing temporal modalities and Kripke's weak modal logic operators, in order to expand its knowledge representation capabilities through flexibility in classifying newly generated rules and, hence, resolving potential conflicts between these fuzzy rules. We address this problem by restructuring the inference process of the fuzzy rule-based system. This is achieved by using branching-time temporal logic in combination with restricted first-order quantifiers, as well as propositional logic, to represent the classical temporal modality operators. The resulting findings not only enhance the flexibility of the inference process in complex rule-based systems but also contribute to the fundamental methods of building rule bases in a manner that admits a wider range of applicable real-life situations, from both quantitative and qualitative knowledge-representation perspectives.
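
As a toy illustration of the kind of conflict resolution targeted here, the following Python sketch ranks contradictory defeasible rules by the specificity principle, breaking ties with a temporal index (a crude stand-in for the temporal modalities). The rule structure and field names are our own assumptions, not the paper's formalism:

```python
from dataclasses import dataclass

@dataclass
class FuzzyRule:
    name: str
    conclusion: str       # '~X' is read as contradicting 'X'
    truth: float          # fuzzy truth value in [0, 1]
    specificity: int      # number of antecedent conditions
    timestamp: int        # temporal index: later rules may defeat earlier ones

def resolve(rules):
    """Group rules whose conclusions contradict each other and keep,
    per group, the rule preferred by (specificity, recency)."""
    groups = {}
    for r in rules:
        groups.setdefault(r.conclusion.lstrip("~"), []).append(r)
    return [max(g, key=lambda r: (r.specificity, r.timestamp))
            for g in groups.values()]
```

Here the more specific, more recent exception rule defeats the general default, mirroring how a retracted inference would be replaced under non-monotonic reasoning.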

Keywords: fuzzy rule-based systems, fuzzy tense inference, intelligent systems, temporal modalities

Procedia PDF Downloads 85
573 Heat Sink Optimization for a High Power Wearable Thermoelectric Module

Authors: Zohreh Soleimani, Sally Salome Shahzad, Stamatis Zoras

Abstract:

As a result of current energy and environmental concerns, the human body is recognized as a promising source for converting waste heat into electricity (the Seebeck effect). The thermoelectric generator (TEG) is one of the most prevalent means of harvesting body heat and converting it into eco-friendly electrical power. However, the uneven distribution of body heat and the body's curved geometry restrict how much energy can be harvested. To transform the heat dissipated by the body into power effectively, the most direct solution is to conform the TEG to the arbitrary surface of the body and to increase the temperature difference across the thermoelectric legs. Accordingly, this paper presents a computational study in COMSOL Multiphysics focused on the impact of integrating a flexible wearable TEG with a corrugated heat sink on the module's power output. To eliminate external parameters (temperature, air flow, humidity), the simulations were conducted at indoor thermal levels with a stationary wearer. A full thermoelectric characterization of the proposed TEG with a wavy-shaped heat sink was computed, yielding a maximum power output of 25 µW/cm² at a temperature difference of nearly 13 °C. Notably, owing to the flexibility of the proposed TEG and heat sink, the applicability and efficiency of the module remain high even on curved surfaces of the body. The results demonstrate the superiority of such a TEG over state-of-the-art counterparts fabricated without a heat sink, and they offer a new train of thought for the development of self-sustained and unobtrusive wearable power supplies that generate energy from low-grade heat dissipated by the body.
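
For orientation, the order of magnitude of such a module's output can be checked against the textbook matched-load formula P = (S·ΔT)²/(4R). The Seebeck coefficient, internal resistance, and area below are illustrative guesses, not values from the paper, whose 25 µW/cm² figure comes from a full COMSOL simulation:

```python
def teg_power_density_uw_per_cm2(seebeck_v_per_k, delta_t_k,
                                 internal_resistance_ohm, area_cm2):
    """Matched-load power density of a TEG module, in uW/cm^2:
    P = (S * dT)^2 / (4 * R), divided by the module area.
    A first-order estimate only; it ignores contact resistance,
    thermal losses, and the heat-sink geometry studied in the paper.
    """
    open_circuit_v = seebeck_v_per_k * delta_t_k        # Seebeck voltage
    power_w = open_circuit_v ** 2 / (4.0 * internal_resistance_ohm)
    return power_w / area_cm2 * 1e6                     # W -> uW
```

For example, an assumed module with S = 10 mV/K, R = 100 Ω, and a 4 cm² footprint at ΔT = 13 °C gives roughly 10 µW/cm², the same order of magnitude as the simulated 25 µW/cm².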

Keywords: device simulation, flexible thermoelectric module, heat sink, human body heat

Procedia PDF Downloads 149