Search results for: computational calculations
271 Investigation of Heat Conduction through Particulate Filled Polymer Composite
Authors: Alok Agrawal, Alok Satapathy
Abstract:
In this paper, an attempt is made to determine the effective thermal conductivity (keff) of particulate filled polymer composites using the finite element method (FEM), a powerful computational technique. A commercially available finite element package, ANSYS, is used for this numerical analysis. Three-dimensional spheres-in-cube lattice array models are constructed to simulate the microstructures of micro-sized particulate filled polymer composites with filler content ranging from 2.35 to 26.8 vol %. Based on the temperature profiles across the composite body, the keff of each composition is estimated theoretically by FEM. Composites with similar filler contents are then fabricated using the compression molding technique by reinforcing micro-sized aluminium oxide (Al2O3) in polypropylene (PP) resin. Thermal conductivities of these composite samples are measured according to ASTM standard E-1530 using the Unitherm™ Model 2022 tester, which operates on the double guarded heat flow principle. The experimentally measured conductivity values are compared with the numerical values and also with those obtained from existing empirical models. This comparison reveals that the FEM-simulated values are in reasonably good agreement with the experimental data. Values obtained from the theoretical model proposed by the authors are found to be in even closer agreement with the measured values within the percolation limit. Further, this study shows that there is a gradual enhancement in the conductivity of PP resin with increasing filler percentage, and thereby its heat conduction capability is improved. It is noticed that with the addition of 26.8 vol % of filler, the keff of the composite increases to around 6.3 times that of neat PP. This study validates the proposed model for the PP-Al2O3 composite system and shows that finite element analysis can be an excellent methodology for such investigations. 
With such improved heat conduction ability, these composites can find potential applications in micro-electronics, printed circuit boards, encapsulations, etc.
Keywords: analytical modelling, effective thermal conductivity, finite element method, polymer matrix composite
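The FEM post-processing step described above reduces, in the steady 1D limit, to Fourier's law: the effective conductivity follows from the heat flux through the composite and the temperature drop across it. A minimal sketch, with illustrative numbers (not the paper's data):

```python
# Sketch: effective thermal conductivity from a simulated temperature drop.
# For steady 1D conduction, q = k * dT / L, so k = q * L / dT.
# All numbers below are illustrative assumptions, not measured values.

def k_eff(heat_flux, thickness, delta_T):
    """Effective conductivity in W/(m K) from heat flux (W/m^2),
    sample thickness (m) and temperature drop (K)."""
    return heat_flux * thickness / delta_T

# Illustrative case: 1 cm thick sample, 500 W/m^2 flux, 20 K drop.
k = k_eff(heat_flux=500.0, thickness=0.01, delta_T=20.0)
print(round(k, 3))  # 0.25 W/(m K)
```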
Procedia PDF Downloads 322
270 Impacts on Marine Ecosystems Using a Multilayer Network Approach
Authors: Nelson F. F. Ebecken, Gilberto C. Pereira, Lucio P. de Andrade
Abstract:
Bays, estuaries and coastal ecosystems are some of the most used and threatened natural systems globally. Their deterioration is due to intense and increasing human activities. This paper aims to monitor a socio-ecological system in Brazil and to model and simulate it through a multilayer network representing a DPSIR structure (Drivers, Pressures, States, Impacts, Responses), considering the concept of Ecosystem-Based Management to support decision-making under the National/State/Municipal Coastal Management policy. This approach considers several interferences and can represent a significant advance in several scientific aspects. The main objective of this paper is the coupling of three different types of complex networks, the first being an ecological network, the second a social network, and the third a network of economic activities, in order to model the marine ecosystem. Multilayer networks comprise two or more "layers", which may represent different types of interactions, different communities, different points in time, and so on. The dependency between layers results from processes that affect the various layers. For example, the dispersion of individuals between two patches affects the network structure of both patches. A multilayer network consists of (i) a set of physical nodes representing entities (e.g., species, people, companies); (ii) a set of layers, which may include multiple layering aspects (e.g., time dependency and multiple types of relationships); (iii) a set of state nodes, each of which corresponds to the manifestation of a given physical node in a specific layer; and (iv) a set of edges (weighted or not) connecting the state nodes among themselves. The edge set includes the familiar intralayer edges and the interlayer ones, which connect state nodes between layers. 
The methodology, applied to an existing case, uses the flow cytometry process and the modeling of ecological relationships (trophic and non-trophic) following fuzzy theory concepts and graph visualization. The identification of subnetworks in the fuzzy graphs is carried out using a specific computational method. This methodology allows considering the influence of different factors and helps to assess their contributions to the decision-making process.
Keywords: marine ecosystems, complex systems, multilayer network, ecosystems management
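The state-node/edge structure defined in the abstract can be sketched directly: state nodes are (physical node, layer) pairs, and an edge is intralayer when both endpoints share a layer, interlayer otherwise. The node names, layers and relations below are invented for illustration, not taken from the Brazilian case study:

```python
# Toy multilayer network: state nodes are (physical node, layer) pairs.
# Layers and entities are hypothetical examples of the three coupled
# networks (ecological, social, economic) described in the abstract.

edges = [
    (("plankton", "ecological"), ("fish", "ecological")),  # intralayer (trophic)
    (("fish", "ecological"), ("fisher", "social")),        # interlayer (harvest pressure)
    (("fisher", "social"), ("market", "economic")),        # interlayer (trade)
]

def is_interlayer(edge):
    """An edge is interlayer when its two state nodes live in different layers."""
    (_, layer_a), (_, layer_b) = edge
    return layer_a != layer_b

interlayer_edges = [e for e in edges if is_interlayer(e)]
print(len(interlayer_edges))  # 2
```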
Procedia PDF Downloads 113
269 CFD-DEM Modelling of Liquid Fluidizations of Ellipsoidal Particles
Authors: Esmaeil Abbaszadeh Molaei, Zongyan Zhou, Aibing Yu
Abstract:
The applications of liquid fluidizations have increased in many industrial areas such as particle classification, backwashing of granular filters, crystal growth, leaching and washing, and bioreactors, due to highly efficient liquid-solid contact, favorable mass and heat transfer, high operational flexibility, and reduced back mixing of phases. In most of these multiphase operations the particle properties, i.e. size, density, and shape, may change during the process because of attrition, coalescence or chemical reactions. Previous studies, either experimental or numerical, have mainly focused on liquid-solid fluidized beds containing spherical particles; however, the role of particle shape in the hydrodynamics of liquid fluidized beds is still not well understood. A three-dimensional Discrete Element Model (DEM) and Computational Fluid Dynamics (CFD) are coupled to study the influence of particle shape on particle and liquid flow patterns in liquid-solid fluidized beds. In the simulations, ellipsoidal particles are used to study the shape factor, since they can represent a wide range of particle shapes, from oblate through spherical to prolate. Different particle shapes from oblate (disk shape) to elongated (rod shape) are selected to investigate the effect of aspect ratio on different flow characteristics such as general particle and liquid flow pattern, pressure drop, and particle orientation. First, the model is verified against experimental observations; then further detailed analyses are made. It was found that spherical particles showed a uniform particle distribution in the bed, which resulted in a uniform pressure drop along the bed height. However, for particles with aspect ratios less than one (disk-shaped), some particles were carried into the freeboard region, and the interface between the bed and the freeboard was not easy to determine. A few particles also tended to leave the bed. 
On the other hand, prolate particles showed different behaviour in the bed. They caused an unstable interface, and some flow channeling was observed at low liquid velocities. Because of the non-uniform particle flow pattern for particles with aspect ratios below (oblate) and above (prolate) one, the pressure drop distribution in the bed was not as uniform as that found for spherical particles.
Keywords: CFD, DEM, ellipsoid, fluidization, multiphase flow, non-spherical, simulation
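The aspect-ratio parameterization used above can be made concrete: a spheroid with semi-axes (a, a, c) is oblate for c/a < 1, spherical for c/a = 1 and prolate for c/a > 1, and its volume-equivalent sphere diameter is a common reference size in CFD-DEM work. A minimal sketch with illustrative values:

```python
# Shape classification and volume-equivalent diameter for a spheroid with
# semi-axes (a, a, c). Values are illustrative, not the paper's geometry.

def classify(aspect_ratio):
    """Oblate (disk-like) below 1, prolate (rod-like) above 1."""
    if aspect_ratio < 1.0:
        return "oblate"
    if aspect_ratio > 1.0:
        return "prolate"
    return "sphere"

def volume_equivalent_diameter(a, c):
    """Diameter of the sphere with the same volume as the spheroid:
    V = (4/3)*pi*a*a*c, hence d_eq = 2*(a*a*c)**(1/3)."""
    return 2.0 * (a * a * c) ** (1.0 / 3.0)

print(classify(0.5), classify(2.0))          # oblate prolate
print(volume_equivalent_diameter(1.0, 1.0))  # 2.0
```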
Procedia PDF Downloads 310
268 A Xenon Mass Gauging through Heat Transfer Modeling for Electric Propulsion Thrusters
Authors: A. Soria-Salinas, M.-P. Zorzano, J. Martín-Torres, J. Sánchez-García-Casarrubios, J.-L. Pérez-Díaz, A. Vakkada-Ramachandran
Abstract:
The current state-of-the-art methods for mass gauging of Electric Propulsion (EP) propellants in microgravity conditions rely on external measurements taken at the surface of the tank. The tanks are operated under a constant thermal duty cycle to store the propellant within a pre-defined temperature and pressure range. We demonstrate using computational fluid dynamics (CFD) simulations that the heat transfer within the pressurized propellant generates temperature and density anisotropies. This challenges the standard mass gauging methods that rely on time-varying skin temperatures and pressures. We observe that the domes of the tanks are prone to overheating, and that a long time after the heaters of the thermal cycle are switched off, the system reaches a quasi-equilibrium state with a more uniform density. We propose a new gauging method, which we call the Improved PVT method, based on universal physics and thermodynamics principles, existing TRL-9 technology and telemetry data. This method only uses as inputs the temperature and pressure readings of sensors externally attached to the tank. These sensors can operate during the nominal thermal duty cycle. The Improved PVT method shows little sensitivity to the pressure sensor drifts, which are critical towards the end of life of the missions, as well as little sensitivity to systematic temperature errors. The retrieval method has been validated experimentally with CO2 in the gas and fluid states in a chamber that operates up to 82 bar within a nominal thermal cycle of 38 °C to 42 °C. The mass gauging error is shown to be lower than 1% of the mass at the beginning of life, assuming an initial tank load at 100 bar. In particular, for a pressure of about 70 bar, just below the critical pressure of CO2, the error of the mass gauging in the gas phase goes down to 0.1%, and for 77 bar, just above the critical point, the error of the mass gauging of the liquid phase is 0.6% of the initial tank load. 
This gauging method improves the accuracy of the standard PVT retrievals, which use look-up tables with tabulated data from the National Institute of Standards and Technology, by a factor of 8.
Keywords: electric propulsion, mass gauging, propellant, PVT, xenon
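The principle behind any PVT gauging can be sketched in a few lines: from externally measured pressure and temperature, the propellant mass follows from the real-gas law p·V = (m/M)·Z·R·T. The compressibility factor Z and all numbers below are illustrative assumptions, not the paper's calibration data:

```python
# Sketch of the basic PVT mass retrieval: m = p * V * M / (Z * R * T).
# Z and the operating point below are assumed values for illustration.

R = 8.314462618  # universal gas constant, J/(mol K)

def pvt_mass(p, V, T, M, Z=1.0):
    """Mass (kg) from pressure p (Pa), volume V (m^3), temperature T (K),
    molar mass M (kg/mol) and compressibility factor Z."""
    return p * V * M / (Z * R * T)

# Xenon (M = 0.1313 kg/mol) in a 50 L tank at 10 bar and 300 K, Z assumed 0.9.
m = pvt_mass(p=10e5, V=0.05, T=300.0, M=0.1313, Z=0.9)
print(round(m, 2))  # kg
```

The Improved PVT method refines this by choosing when in the thermal duty cycle the (p, T) readings are representative of a near-uniform density state.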
Procedia PDF Downloads 345
267 Critical Parameters of a Square-Well Fluid
Authors: Hamza Javar Magnier, Leslie V. Woodcock
Abstract:
We report extensive molecular dynamics (MD) computational investigations into the thermodynamic description of supercritical properties for a model fluid that is the simplest realistic representation of atoms or molecules. The pair potential is a hard-sphere repulsion of diameter σ with a very short attraction of length λσ. When λ = 1.005 the range is so short that the model atoms are referred to as "adhesive spheres". Molecular dimers, trimers, etc., up to large clusters, or droplets, of many adhesive-sphere atoms are unambiguously defined. This then defines percolation transitions at the molecular level that bound the existence of gas and liquid phases at supercritical temperatures, and which define the existence of a supercritical mesophase. Both liquid and gas phases are seen to terminate at the loci of percolation transitions, and below a second characteristic temperature (Tc2) they are separated by the supercritical mesophase. An analysis of the distribution of clusters in the gas, meso- and liquid phases confirms the colloidal nature of this mesophase. The general phase behaviour is compared with both the experimental properties of the water-steam supercritical region and the formally exact cluster theory of Mayer and Mayer. Both are found to be consistent with the present findings that in this system the supercritical mesophase narrows in density with increasing T > Tc and terminates at a higher Tc2 at a confluence of the primary percolation loci. The expanded plot of the MD data points in the mesophase of 7 critical and supercritical isotherms highlights this narrowing in density of the linear-slope region of the mesophase as temperature is increased above the critical value. This linearity in the mesophase implies the existence of a linear combination rule between gas and liquid, an extension of the lever rule in the subcritical region, which can be used to obtain critical parameters without resorting to experimental data in the two-phase region. 
Using this combination rule, the calculated critical parameters Tc = 0.2007 and Pc = 0.0278 are found to agree with the values reported by Largo and coworkers. The properties of this supercritical mesophase are shown to be consistent with an alternative description of the phenomenon of critical opalescence seen in the supercritical region of both molecular and colloidal-protein supercritical fluids.
Keywords: critical opalescence, supercritical, square-well, percolation transition, critical parameters
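The linear combination rule described above can be sketched as an extended lever rule: within the mesophase, a property varies linearly in density between its values at the gas-side and liquid-side percolation boundaries. The densities and pressures below are invented for illustration, not the paper's reduced units:

```python
# Extended lever rule across the mesophase: linear interpolation in density
# between the two percolation boundaries. Numbers are illustrative only.

def mesophase_pressure(rho, rho_gas, rho_liq, p_gas, p_liq):
    """Pressure at density rho, interpolated linearly between the gas-side
    (rho_gas, p_gas) and liquid-side (rho_liq, p_liq) percolation loci."""
    x_gas = (rho_liq - rho) / (rho_liq - rho_gas)  # lever-rule gas fraction
    return x_gas * p_gas + (1.0 - x_gas) * p_liq

# Midway in density, the pressure is the mean of the boundary values.
p_mid = mesophase_pressure(rho=0.3, rho_gas=0.1, rho_liq=0.5,
                           p_gas=0.020, p_liq=0.036)
print(round(p_mid, 6))  # 0.028
```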
Procedia PDF Downloads 522
266 Advancements in Mathematical Modeling and Optimization for Control, Signal Processing, and Energy Systems
Authors: Zahid Ullah, Atlas Khan
Abstract:
This abstract focuses on the advancements in mathematical modeling and optimization techniques that play a crucial role in enhancing the efficiency, reliability, and performance of control, signal processing, and energy systems. In this era of rapidly evolving technology, mathematical modeling and optimization offer powerful tools to tackle the complex challenges faced by these systems. This abstract presents the latest research and developments in mathematical methodologies, encompassing areas such as control theory, system identification, signal processing algorithms, and energy optimization. It highlights the interdisciplinary nature of mathematical modeling and optimization, showcasing their applications in a wide range of domains, including power systems, communication networks, industrial automation, and renewable energy. It explores key mathematical techniques, such as linear and nonlinear programming, convex optimization, stochastic modeling, and numerical algorithms, that enable the design, analysis, and optimization of complex control and signal processing systems. Furthermore, the abstract emphasizes the importance of addressing real-world challenges in control, signal processing, and energy systems through innovative mathematical approaches. It discusses the integration of mathematical models with data-driven approaches, machine learning, and artificial intelligence to enhance system performance, adaptability, and decision-making capabilities. The abstract also underscores the significance of bridging the gap between theoretical advancements and practical applications. It recognizes the need for practical implementation of mathematical models and optimization algorithms in real-world systems, considering factors such as scalability, computational efficiency, and robustness. In summary, this abstract showcases the advancements in mathematical modeling and optimization techniques for control, signal processing, and energy systems. 
It highlights the interdisciplinary nature of these techniques, their applications across various domains, and their potential to address real-world challenges. The abstract emphasizes the importance of practical implementation and integration with emerging technologies to drive innovation and improve the performance of control, signal processing, and energy systems.
Keywords: mathematical modeling, optimization, control systems, signal processing, energy systems, interdisciplinary applications, system identification, numerical algorithms
Procedia PDF Downloads 112
265 Simulation of Elastic Bodies through Discrete Element Method, Coupled with a Nested Overlapping Grid Fluid Flow Solver
Authors: Paolo Sassi, Jorge Freiria, Gabriel Usera
Abstract:
In this work, a finite volume fluid flow solver is coupled with a discrete element method module for the simulation of the dynamics of free and elastic bodies in interaction with the fluid and between themselves. The open source fluid flow solver, caffa3d.MBRi, includes the capability to work with nested overlapping grids in order to easily refine the grid in the region where the bodies are moving. To do so, it is necessary to implement a recognition function able to identify the specific mesh block in which each body is moving. The set of overlapping finer grids can be displaced along with the set of bodies being simulated. The interaction between the bodies and the fluid is computed through a two-way coupling. The velocity field of the fluid is first interpolated to determine the drag force on each object. After solving the objects' displacements, subject to the elastic bonding among them, the force is applied back onto the fluid through a Gaussian smoothing over the cells near the position of each object. The fishnet is represented as lumped masses connected by elastic lines. The internal forces are derived from the elasticity of these lines, and the external forces are due to drag, gravity, buoyancy and the load acting on each element of the system. When solving the system of ordinary differential equations that represents the motion of the elastic and flexible bodies, it was found that the fourth-order Runge-Kutta solver is the best tool in terms of performance, but it requires a finer grid than the fluid solver to make the system converge, which demands greater computing power. The coupled solver is demonstrated by simulating the interaction between the fluid, an elastic fishnet and a set of free bodies being captured by the net as they are dragged by the fluid. The deformation of the net, as well as the wake produced in the fluid stream, are well captured by the method, without requiring the fluid solver mesh to adapt to the evolving geometry. 
Application of the same strategy to the simulation of elastic structures subject to the action of wind is also possible with the method presented, and one such application is currently under development.
Keywords: computational fluid dynamics, discrete element method, fishnets, nested overlapping grids
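The time-integration choice discussed above can be sketched on the smallest possible lumped-mass system: one mass on an elastic line with linear drag, advanced with the classical fourth-order Runge-Kutta scheme. Parameter values are illustrative, not the solver's:

```python
# Classical RK4 step applied to a single lumped mass on an elastic line
# (spring stiffness k, mass m, linear drag c). Illustrative parameters.

def rk4_step(f, y, t, dt):
    """One fourth-order Runge-Kutta step for dy/dt = f(t, y), y a list."""
    k1 = f(t, y)
    k2 = f(t + dt / 2, [yi + dt / 2 * ki for yi, ki in zip(y, k1)])
    k3 = f(t + dt / 2, [yi + dt / 2 * ki for yi, ki in zip(y, k2)])
    k4 = f(t + dt, [yi + dt * ki for yi, ki in zip(y, k3)])
    return [yi + dt / 6 * (ka + 2 * kb + 2 * kc + kd)
            for yi, ka, kb, kc, kd in zip(y, k1, k2, k3, k4)]

m, k, c = 1.0, 4.0, 0.1  # mass, elastic stiffness, drag coefficient

def rhs(t, y):
    x, v = y  # displacement and velocity of the lumped mass
    return [v, (-k * x - c * v) / m]

y = [1.0, 0.0]  # initial displacement, velocity
for i in range(1000):  # integrate 10 s with dt = 0.01
    y = rk4_step(rhs, y, i * 0.01, 0.01)
print(y)  # damped oscillation: amplitude has decayed below the initial 1.0
```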
Procedia PDF Downloads 416
264 Contribution of Artificial Intelligence in the Studies of Natural Compounds Against SARS-COV-2
Authors: Salah Belaidi
Abstract:
We have carried out extensive and in-depth research to search for bioactive compounds based on Algerian plants. A selection of 50 ligands from Algerian medicinal plants was made. Several compounds used in herbal medicine were drawn using the Marvin Sketch software. We determined the three-dimensional structures of the ligands with the MMFF94 force field in order to prepare them for molecular docking. The 3D protein structure of the SARS-CoV-2 main protease was taken from the Protein Data Bank. We used the AutoDock Vina software to apply molecular docking. The hydrogen atoms were added during the molecular docking process, and all the rotatable bonds of the ligands were set using the (ligand) module in the AutoDock software. The COVID-19 main protease (Mpro) is a key enzyme that plays a vital role in viral transcription and in mediating replication, so it is a very attractive drug target for SARS-CoV-2. In this work, the biologically active compounds present in these selected medicinal plants were evaluated as effective inhibitors of the COVID-19 protease enzyme, with an in-depth computational calculation of the molecular docking using the AutoDock Vina software. The top 7 ligands, Phloroglucinol, Afzelin, Myricetin-3-O-rutinoside, Tricin 7-neohesperidoside, Silybin, Silychristin and Kaempferol, were selected among the 50 molecules studied, all drawn from Algerian medicinal plants; their selection is based on binding energies that are low compared to the reference molecule, with binding affinities of -9.3, -9.3, -9.0, -8.9, -8.5, -8.3 and -8.3 kcal mol-1, respectively. Then, we analyzed the ADME properties of the best 7 ligands using the SwissADME web server. Two ligands (Silybin, Silychristin) were found to be potential candidates for the discovery and design of novel drug inhibitors of the SARS-CoV-2 protease enzyme. 
The stability of the two ligands in complex with the Mpro protease was validated by molecular dynamics simulation; both metrics, RMSD and RMSF, revealed stable trajectories with coherent molecular interactions. Finally, we conclude that the Silybin ligand forms a more stable complex with the Mpro protease than the Silychristin ligand.
Keywords: COVID-19, medicinal plants, molecular docking, ADME properties, molecular dynamics
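The selection step described above amounts to ranking docking results by binding affinity (kcal/mol; more negative means stronger binding) against a reference. The affinity values reproduce the abstract's numbers; the pairing of names to values follows the listed order, and the -8.0 cutoff is an assumption for illustration:

```python
# Rank docked ligands by binding affinity and keep those beating a cutoff.
# Affinities are the abstract's reported values (kcal/mol); the cutoff is
# an assumed stand-in for the reference molecule's score.

affinities = {
    "Phloroglucinol": -9.3, "Afzelin": -9.3,
    "Myricetin-3-O-rutinoside": -9.0, "Tricin 7-neohesperidoside": -8.9,
    "Silybin": -8.5, "Silychristin": -8.3, "Kaempferol": -8.3,
}
cutoff = -8.0  # assumed reference affinity

hits = sorted((name for name, e in affinities.items() if e <= cutoff),
              key=lambda name: affinities[name])
print(len(hits))  # 7
```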
Procedia PDF Downloads 36
263 Computational Approach to Cyclin-Dependent Kinase 2 Inhibitors Design and Analysis: Merging Quantitative Structure-Activity Relationship, Absorption, Distribution, Metabolism, Excretion, and Toxicity, Molecular Docking, and Molecular Dynamics Simulations
Authors: Mohamed Moussaoui, Mouna Baassi, Soukayna Baammi, Hatim Soufi, Mohammed Salah, Rachid Daoud, Achraf EL Allali, Mohammed Elalaoui Belghiti, Said Belaaouad
Abstract:
The present study aims to investigate the quantitative structure-activity relationship (QSAR) of a series of thiazole derivatives reported as anticancer agents (hepatocellular carcinoma), using principally the electronic descriptors calculated by the density functional theory (DFT) method and applying the multiple linear regression method. The developed model showed good statistical parameters (R² = 0.725, R²ₐ𝒹ⱼ = 0.653, MSE = 0.060, R²ₜₑₛₜ = 0.827, Q²𝒸ᵥ = 0.536). The energy of the highest occupied molecular orbital (EHOMO), electronic energy (TE), shape coefficient (I), number of rotatable bonds (NROT), and index of refraction (n) were revealed to be the main descriptors influencing the anticancer activity. Additional thiazole derivatives were then designed, and their activities and pharmacokinetic properties were predicted using the validated QSAR model. These designed molecules underwent evaluation through molecular docking and molecular dynamics (MD) simulations, with binding affinity calculated using the MMPBSA script over a 100 ns simulation trajectory. This process aimed to study both their affinity and stability towards Cyclin-Dependent Kinase 2 (CDK2), a target protein for cancer treatment. The research concluded by identifying four CDK2 inhibitors, A1, A3, A5, and A6, displaying satisfactory pharmacokinetic properties. MD results indicated that the designed compound A5 remained stable in the active center of the CDK2 protein, suggesting its potential as an effective inhibitor for the treatment of hepatocellular carcinoma. The findings of this study could contribute significantly to the development of effective CDK2 inhibitors.
Keywords: QSAR, ADMET, Thiazole, anticancer, molecular docking, molecular dynamic simulations, MMPBSA calculation
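The multiple linear regression step behind a QSAR model of this kind can be sketched in pure Python: fit activity against descriptor columns by least squares and report R². The two descriptors and activity values below are invented toy data, not the paper's DFT descriptors:

```python
# Minimal multiple linear regression for a QSAR-style fit, via the normal
# equations (X^T X) b = X^T y solved by Gaussian elimination. Toy data.

def fit_mlr(X, y):
    """Least-squares coefficients for rows X (first column = intercept)."""
    n = len(X[0])
    A = [[sum(row[i] * row[j] for row in X) for j in range(n)] for i in range(n)]
    b = [sum(row[i] * yi for row, yi in zip(X, y)) for i in range(n)]
    for i in range(n):  # forward elimination with partial pivoting
        p = max(range(i, n), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        b[i], b[p] = b[p], b[i]
        for r in range(i + 1, n):
            f = A[r][i] / A[i][i]
            for col in range(i, n):
                A[r][col] -= f * A[i][col]
            b[r] -= f * b[i]
    coef = [0.0] * n
    for i in reversed(range(n)):  # back substitution
        coef[i] = (b[i] - sum(A[i][j] * coef[j] for j in range(i + 1, n))) / A[i][i]
    return coef

def r_squared(X, y, coef):
    pred = [sum(c * x for c, x in zip(coef, row)) for row in X]
    y_mean = sum(y) / len(y)
    ss_res = sum((yi - pi) ** 2 for yi, pi in zip(y, pred))
    ss_tot = sum((yi - y_mean) ** 2 for yi in y)
    return 1.0 - ss_res / ss_tot

# Toy data: intercept column plus two descriptors; activities follow
# y = 2 + 3*d1 - d2 exactly, so the fit should recover R^2 ~ 1.
X = [[1, 0, 1], [1, 1, 0], [1, 2, 1], [1, 3, 3], [1, 4, 2]]
y = [1, 5, 7, 8, 12]
coef = fit_mlr(X, y)
print([round(c, 6) for c in coef], round(r_squared(X, y, coef), 6))
```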
Procedia PDF Downloads 107
262 Slosh Investigations on a Spacecraft Propellant Tank for Control Stability Studies
Authors: Sarath Chandran Nair S, Srinivas Kodati, Vasudevan R, Asraff A. K
Abstract:
Spacecraft generally employ liquid propulsion for their attitude and orbital maneuvers or for raising them from geo-transfer orbit to geosynchronous orbit. Liquid propulsion systems use either mono-propellants or bi-propellants for generating thrust. These propellants are generally stored in either spherical tanks or cylindrical tanks with spherical end domes. The propellant tanks are provided with a propellant acquisition system/propellant management device, along with vanes and their conical mounting structure, to ensure propellant availability at the outlet for thrust generation even under a low/zero-gravity environment. Slosh refers to the free-surface oscillations in partially filled containers under external disturbances. In a spacecraft, these can be due to control forces and varying acceleration. Knowledge of slosh and of the effect of tank internals on it is essential for understanding its stability through control stability studies. Slosh is mathematically represented by a pendulum-mass model, which requires parameters such as slosh frequency, damping, slosh mass and its location, etc. This paper enumerates various numerical and experimental methods used for evaluating the slosh parameters required for representing slosh. Numerical methods, such as finite element methods based on linear velocity potential theory and computational fluid dynamics based on the Reynolds-Averaged Navier-Stokes equations, are used for the detailed evaluation of slosh behavior in one of the spacecraft propellant tanks used in an Indian space mission. Experimental studies carried out on a scaled-down model are also discussed. Slosh parameters evaluated by the different methods matched very well, and their dispersion bands were finalized based on the experimental studies. It is observed that the presence of internals such as propellant management devices, including the conical support structure, alters the slosh parameters. These internals also offer an order of magnitude higher damping compared to viscous/smooth-wall damping. 
This is advantageous for slosh stability. These slosh parameters are provided for establishing slosh margins through control stability studies and for finalizing the spacecraft control system design.
Keywords: control stability, propellant tanks, slosh, spacecraft, slosh spacecraft
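One of the slosh parameters named above, the fundamental frequency, has a closed form in the simplest geometry. The classical linear result for an upright cylindrical tank is sketched below; the paper's tanks are spherical or domed, so this is illustrative only, and the fill level and radius are invented:

```python
import math

# First lateral slosh frequency in an upright cylindrical tank, from the
# classical linear theory: omega^2 = (e1 * g / R) * tanh(e1 * h / R),
# where e1 = 1.841 is the first root of J1'(x) = 0. Tank geometry below
# is an assumed example, not the mission tank.

def slosh_frequency(R, h, g=9.81):
    """Frequency in Hz for tank radius R (m) and liquid depth h (m)."""
    e1 = 1.841
    return math.sqrt((e1 * g / R) * math.tanh(e1 * h / R)) / (2.0 * math.pi)

f = slosh_frequency(R=0.5, h=0.4)
print(round(f, 3))  # Hz
```

In the pendulum-mass analogy, this frequency fixes the equivalent pendulum length via L = g / (2*pi*f)^2.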
Procedia PDF Downloads 245
261 Cysteine Proteases of Plants That Act on the Coagulation Cascade: Approach from Bioinformatics
Authors: Tapiwa Brine Mutsauri
Abstract:
The MEROPS system is an information resource for proteases that classifies them into clans according to their catalytic type. Within the plant kingdom, cysteine proteases are among the best known, as they are the catalytic type on which the first studies on plant proteases were focused. Plant cysteine proteases have a mechanism of action similar to that of serine proteases, and some are known to have activity on factors of the blood coagulation cascade, such as a potent antithrombotic effect, and also cause increased fibrinolysis. The three-dimensional structure is known for only a few plant cysteine proteases, so determining their structures would be a method of interest for predicting their potential activity on the factors of the coagulation cascade. Phylogenetics is the study of the evolutionary relationships between biological entities, often species, individuals, or genes (which can be called taxa). It is essential to identify the evolutionary position and the possible distribution of these enzymes in the plant kingdom, particularly those that act on coagulation factors. Bioinformatic tools, such as Clustal Omega / Jalview and Mega6, can be used to create phylogenetic trees. From the results of the alignment, it can be seen that although there is a certain degree of conservation and consensus among the eleven sequences, in the functionally important motifs (those corresponding to the active site) the degree of conservation and consensus is very low. We could then infer that although activity on coagulation is reported for these enzymes, linked to their structural and mechanistic similarity with serine proteases, this activity may not have a direct or primary relationship with the proteolytic activity associated with their common, poorly conserved active site. This ultimately could be related to modifications in the reaction mechanism of several of the enzymes studied, which would require more detailed study. 
Below, we also deal with factors that may have a greater influence on this result. The results of this work enrich the understanding of how species (and molecular sequences in general) evolve. Through phylogenetics, we learn not only how sequences came to be the way they are today but also the general principles that allow us to predict how they will change in the future. For the pharmaceutical sciences, phylogenetic selection of biologically related species can help identify closely related members of a species with compounds of pharmacological importance, such as plant cysteine proteases, in addition to identifying structural features that may influence their pharmacological activity and that can be valuable for drug design.
Keywords: computational simulation, proteases, coagulation, bioinformatics
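The column-wise conservation check described above can be sketched as: for each alignment column, the fraction of sequences agreeing with the most common residue. The three short sequences below are invented; real input would come from a Clustal Omega alignment:

```python
from collections import Counter

# Per-column conservation score for a multiple sequence alignment:
# fraction of sequences carrying the consensus residue (gaps ignored
# when picking the consensus). Toy sequences for illustration.

def column_conservation(alignment):
    scores = []
    for col in zip(*alignment):
        residues = [r for r in col if r != "-"]
        if not residues:
            scores.append(0.0)
            continue
        consensus_count = Counter(residues).most_common(1)[0][1]
        scores.append(consensus_count / len(col))
    return scores

aln = ["CGSCWAFS", "CGSCWGFS", "CGACWAFT"]  # invented, papain-like motif
scores = column_conservation(aln)
print([round(s, 2) for s in scores])
```

Fully conserved columns score 1.0; a poorly conserved active-site motif would show up as a run of low scores.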
Procedia PDF Downloads 19
260 Numerical Investigation of Effect of Throat Design on the Performance of a Rectangular Ramjet Intake
Authors: Subrat Partha Sarathi Pattnaik, Rajan N.K.S.
Abstract:
Integrated rocket ramjet engines are highly suitable for long range missile applications. Designing fixed-geometry intakes for such missiles that can operate efficiently over a range of operating conditions is a highly challenging task. Hence, the present study aims to evaluate the effect of throat design on the performance of a rectangular mixed compression intake for operation in the Mach number range of 1.8-2.5. The analysis has been carried out at four different Mach numbers of 1.8, 2, 2.2 and 2.5 and two angles of attack of +5 and +10 degrees. For the throat design, three different throat heights have been considered: one corresponding to a 3-external-shock design and two heights corresponding to a 2-external-shock design, leading to different internal contraction ratios. The on-design Mach number for the study is Mach 2.2. To obtain the viscous flow field in the intake, the theoretical designs have been analyzed with computational fluid dynamics, for which the Favre-averaged Navier-Stokes (FANS) equations with the two-equation SST k-w model have been solved. The analysis shows that for zero angle of attack, at on-design and high off-design Mach number operations, the three-ramp design leads to a higher total pressure recovery (TPR) compared to the two-ramp design at both contraction ratios, maintaining the same mass flow ratio (MFR). But at low off-design Mach numbers the total pressure shows the opposite trend: it is maximum for the two-ramp low contraction ratio design, due to lower shock loss across the external shocks. Similarly, the MFR is higher for the low contraction ratio design, as the external ramp shocks move closer to the cowl. At both angle-of-attack conditions and over the complete range of Mach numbers, the total pressure recovery and mass flow ratios are highest for the two-ramp low contraction design, due to lower stagnation pressure loss across the detached bow shock formed at the ramp and lower mass spillage. 
Hence, the low contraction ratio design is found to be suitable for higher off-design performance.
Keywords: internal contraction ratio, mass flow ratio, mixed compression intake, performance, supersonic flows
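The TPR metric used above measures stagnation pressure loss through the shock system. A full ramp analysis needs the oblique-shock theta-beta-Mach relation; as a minimal sketch, the loss across a single normal shock (the terminal shock of a mixed compression intake) has a closed form:

```python
# Stagnation pressure ratio p02/p01 across a normal shock at Mach M,
# for a calorically perfect gas (Rankine-Hugoniot relations). A sketch
# of one contributor to intake TPR, not the full multi-shock analysis.

def normal_shock_tpr(M, g=1.4):
    a = ((g + 1) * M**2 / ((g - 1) * M**2 + 2)) ** (g / (g - 1))
    b = ((g + 1) / (2 * g * M**2 - (g - 1))) ** (1 / (g - 1))
    return a * b

for M in (1.8, 2.0, 2.2, 2.5):  # the study's Mach number range
    print(M, round(normal_shock_tpr(M), 4))
```

The steep drop of this ratio with Mach number is why external ramps are used to decelerate the flow before the terminal shock.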
Procedia PDF Downloads 108
259 In Silico Screening, Identification and Validation of Cryptosporidium hominis Hypothetical Protein and Virtual Screening of Inhibitors as Therapeutics
Authors: Arpit Kumar Shrivastava, Subrat Kumar, Rajani Kanta Mohapatra, Priyadarshi Soumyaranjan Sahu
Abstract:
Computational approaches to predict the structure, function and other biological characteristics of proteins are becoming more common in comparison to the traditional methods in drug discovery. Cryptosporidiosis is a major zoonotic diarrheal disease, particularly in children, which is caused primarily by Cryptosporidium hominis and Cryptosporidium parvum. Currently, there are no vaccines for cryptosporidiosis, and the recommended drugs are not effective. With the availability of the complete genome sequence of C. hominis, new targets have been recognized for the development of effective and better drugs and/or vaccines. We identified a unique hypothetical epitopic protein in the C. hominis genome through BLASTP analysis. A 3D model of the hypothetical protein was generated using the I-TASSER server through a threading methodology. The quality of the model was validated through a Ramachandran plot by the PROCHECK server. The functional annotation of the hypothetical protein through the DALI server revealed structural similarity with human Transportin 3. Phylogenetic analysis also showed that the C. hominis hypothetical protein (CUV04613) was closely related to the human transportin 3 protein. The 3D protein model was further subjected to a virtual screening study with inhibitors from the Zinc Database using the Dock Blaster software. The docking study reported N-(3-chlorobenzyl) ethane-1,2-diamine as the best inhibitor in terms of docking score. Docking analysis elucidated that Leu 525, Ile 526, Glu 528 and Glu 529 are critical residues for ligand-receptor interactions. Molecular dynamics simulation was done to assess the reliability of the binding pose of the inhibitor-protein complex using the GROMACS software over 10 ns. Trajectories were analyzed at 2.5 ns intervals, among which H-bonds with LEU-525 and GLY-530 are significantly present in the MD trajectories. Furthermore, antigenic determinants of the protein were determined with the help of the DNA Star software. 
Our findings show great potential to provide insights into the development of new drug(s) or vaccine(s) for the control and prevention of cryptosporidiosis in humans and animals.
Keywords: Cryptosporidium hominis, hypothetical protein, molecular docking, molecular dynamics simulation
Procedia PDF Downloads 365
258 A Semantic and Concise Structure to Represent Human Actions
Authors: Tobias Strübing, Fatemeh Ziaeetabar
Abstract:
Humans usually manipulate objects with their hands. To represent these actions in a simple and understandable way, we need a semantic framework. For this purpose, the Semantic Event Chain (SEC) method has already been presented, which considers the touching and non-touching relations between manipulated objects in a scene. This method was improved by a computational model, the so-called enriched Semantic Event Chain (eSEC), which incorporates information on static (e.g., top, bottom) and dynamic spatial relations (e.g., moving apart, getting closer) between objects in an action scene. This leads to better action prediction as well as the ability to distinguish between more actions. Each eSEC manipulation descriptor is a large matrix with thirty rows and a rich set of possible spatial relations between each pair of manipulated objects. The current eSEC framework has so far only been used in the category of manipulation actions, which involve at most two hands. Here, we would like to extend this approach to a whole-body action descriptor and create a conjoint activity representation structure. For this purpose, we perform a statistical analysis to modify the current eSEC by summarizing it while preserving its features, introducing a new version called Enhanced eSEC (e2SEC). This summarization can be done from two points of view: 1) reducing the number of rows in an eSEC matrix, and 2) shrinking the set of possible semantic spatial relations. To achieve this, we computed the importance of each matrix row statistically, to see whether a particular row can be removed while all manipulations remain distinguishable from each other. On the other hand, we examined which semantic spatial relations can be merged without compromising the uniqueness of the predefined manipulation actions. 
By performing the above analyses, we created the new e2SEC framework, which has 20% fewer rows, 16.7% fewer static spatial relations and 11.1% fewer dynamic spatial relations. This simplification, while preserving the salient features of a semantic structure for representing actions, has a substantial impact on the recognition and prediction of complex actions, as well as on the interactions between humans and robots. It also creates a comprehensive platform to integrate with body-limb descriptors and markedly increases system performance, especially in complex real-time applications such as human-robot interaction prediction.
Keywords: enriched semantic event chain, semantic action representation, spatial relations, statistical analysis
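The row-reduction criterion described in the abstract — drop a matrix row only if every pair of manipulation descriptors stays distinguishable — can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the three toy descriptors and the relation symbols ("T" touching, "N" not touching, "X" absent) are invented.

```python
from itertools import combinations

def row_removable(descriptors, row):
    """True if deleting `row` from every eSEC-style descriptor keeps all
    manipulations pairwise distinguishable."""
    reduced = [tuple(r for i, r in enumerate(d) if i != row) for d in descriptors]
    return all(a != b for a, b in combinations(reduced, 2))

# Three toy action descriptors, each with 3 rows of symbolic relations
# (hypothetical values, far smaller than the real 30-row matrices).
actions = [
    ("T", "N", "X"),
    ("T", "T", "X"),
    ("N", "N", "X"),
]
```

In this toy data, row 2 is identical across all actions and can be dropped, whereas row 1 is the only feature separating the first two actions, so dropping it would merge them.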
Procedia PDF Downloads 126
257 Development of an Implicit Coupled Partitioned Model for the Prediction of the Behavior of a Flexible Slender Shaped Membrane in Interaction with Free Surface Flow under the Influence of a Moving Flotsam
Authors: Mahtab Makaremi Masouleh, Günter Wozniak
Abstract:
This research is part of an interdisciplinary project promoting the design of a light, temporarily installable textile defence system against floods. When river water levels increase abruptly, especially in winter, one can expect massive extra loads on a textile protective structure in terms of impact from floating debris and even tree trunks. Estimating this impulsive force on such structures is of great importance, as it can ensure the reliability of the design in critical cases. This fact motivates the numerical analysis of a fluid-structure interaction application comprising a flexible, slender-shaped membrane and free-surface water flow, where an accelerated heavy piece of flotsam approaches the membrane. In this context, the behavior of the flexible membrane and its interaction with the moving flotsam are analyzed with the finite element based explicit and implicit solvers of Abaqus, available as products of SIMULIA software. On the other hand, how free-surface water flow behaves in response to moving structures has been investigated using the finite volume solver of Star-CCM+ from Siemens PLM Software. An automatic communication tool (CSE, the SIMULIA Co-Simulation Engine) and the implementation of an effective partitioned strategy in the form of an implicit coupling algorithm make it possible for the partitioned domains to be interconnected robustly. The applied procedure ensures stability and convergence in the solution of these complicated problems, albeit at high computational cost; a further complexity of this study stems from the mesh criteria in the fluid domain where the two structures approach each other. This contribution presents the approaches for establishing a convergent numerical solution and compares the results with experimental findings.
Keywords: co-simulation, flexible thin structure, fluid-structure interaction, implicit coupling algorithm, moving flotsam
Procedia PDF Downloads 389
256 Development of a Decision Model to Optimize Total Cost in Food Supply Chain
Authors: Henry Lau, Dilupa Nakandala, Li Zhao
Abstract:
All along the length of the supply chain, fresh food firms face the challenge of managing both product quality, due to the perishable nature of the products, and product cost. This paper develops a method to assist logistics managers upstream in the fresh food supply chain in making cost-optimized transportation decisions, with the objective of minimizing the total cost while maintaining the quality of food products above acceptable levels. Considering the case of multiple fresh food products collected from multiple farms and transported to a warehouse or a retailer, this study develops a total cost model that includes the various costs incurred during transportation. The practical application of the model is illustrated using several computational intelligence approaches, including Genetic Algorithms (GA), Fuzzy Genetic Algorithms (FGA), and an improved Simulated Annealing (SA) procedure applied with a repair mechanism, for efficiency benchmarking. We demonstrate the practical viability of these approaches by a simulation study based on pertinent data and evaluate the simulation outcomes. All three approaches are adoptable; however, the performance evaluation showed that the FGA is more likely to produce better results than GA or SA. This study provides a pragmatic approach for supporting logistics and supply chain practitioners in the fresh food industry in making important decisions on the arrangements and procedures for transporting multiple fresh food products from multiple farms to a warehouse in a cost-effective way without compromising product quality. 
This study extends the literature on cold supply chain management by investigating cost and quality optimization in a multi-product scenario from farms to a retailer, minimizing cost while managing quality above expected levels at delivery. The scalability of the proposed generic function enables application to alternative situations in practice, such as different storage environments and transportation conditions.
Keywords: cost optimization, food supply chain, fuzzy sets, genetic algorithms, product quality, transportation
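As a rough sketch of the simulated-annealing-with-repair idea used for benchmarking, the following minimizes the transport distance of a single collection route over a handful of farms. The farm coordinates, cooling schedule and perturbation are invented for illustration and are not the authors' cost model; the repair step restores feasibility (each farm visited exactly once) after a perturbation.

```python
import math
import random

def route_cost(route, dist):
    """Total transport distance of visiting the farms in the given order."""
    return sum(dist[route[i]][route[i + 1]] for i in range(len(route) - 1))

def repair(route, farms):
    """Repair mechanism: replace duplicated farms with the missing ones."""
    missing = [f for f in farms if f not in route]
    seen, fixed = set(), []
    for f in route:
        if f in seen:
            fixed.append(missing.pop())
        else:
            seen.add(f)
            fixed.append(f)
    return fixed

def anneal(farms, dist, temp=10.0, cooling=0.995, steps=2000, seed=1):
    rng = random.Random(seed)
    cur = farms[:]
    best = cur[:]
    for _ in range(steps):
        cand = cur[:]
        i, j = rng.randrange(len(cand)), rng.randrange(len(cand))
        cand[i] = cand[j]            # perturbation may create a duplicate...
        cand = repair(cand, farms)   # ...which the repair step fixes
        delta = route_cost(cand, dist) - route_cost(cur, dist)
        if delta < 0 or rng.random() < math.exp(-delta / temp):
            cur = cand               # Metropolis acceptance
        if route_cost(cur, dist) < route_cost(best, dist):
            best = cur[:]
        temp *= cooling
    return best

# Toy farm locations (hypothetical) and pairwise distances.
coords = {0: (0, 0), 1: (2, 1), 2: (5, 0), 3: (1, 4), 4: (4, 3)}
dist = {a: {b: math.dist(coords[a], coords[b]) for b in coords} for a in coords}
farms = list(coords)
best = anneal(farms, dist)
```

Because the best-so-far route is tracked separately, the returned route is never worse than the initial one.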
Procedia PDF Downloads 223
255 Geometric, Energetic and Topological Analysis of (Ethanol)₉-Water Heterodecamers
Authors: Jennifer Cuellar, Angie L. Parada, Kevin N. S. Chacon, Sol M. Mejia
Abstract:
The purification of bio-ethanol through distillation is an unresolved issue in the biofuel industry because of the formation of the ethanol-water azeotrope, which increases the number of steps in the purification process and consequently the production costs. Therefore, understanding the nature of the mixture at the molecular level could provide new insights for improving current methods and/or designing new, more efficient purification methods. For that reason, the present study focuses on the evaluation and analysis of (ethanol)₉-water heterodecamers, as the systems with the minimum molecular proportion that represents the azeotropic concentration (96 %m/m in ethanol). The computational modelling was carried out with B3LYP-D3/6-311++G(d,p) in Gaussian 09. Initial explorations of the potential energy surface were done through two methods, simulated annealing runs and molecular dynamics trajectories, besides intuitive structures obtained from smaller (ethanol)n-water heteroclusters, n = 7, 8 and 9. The energetic ordering of the seven stable heterodecamers identifies the most stable heterodecamer (Hdec-1) as a structure with a bicyclic geometry formed by O-H---O hydrogen bonds (HBs), where the water acts as a double proton donor molecule. Hdec-1 combines 1 water molecule with the same quantity of every ethanol conformer, i.e., 3 trans, 3 gauche 1 and 3 gauche 2; its abundance is 89%, and its decamerization energy is -80.4 kcal/mol, i.e., 13 kcal/mol more stable than the least stable heterodecamer. Besides, as a way to understand why methanol does not form an azeotropic mixture with water, analogous systems ((ethanol)₁₀, (methanol)₁₀, and (methanol)₉-water) were optimized. Topological analysis of the electron density reveals that Hdec-1 forms 33 weak interactions in total: 11 O-H---O, 8 C-H---O and 2 C-H---C hydrogen bonds, and 12 H---H interactions. 
The strength and abundance of the most unconventional interactions (H---H, C-H---C and C-H---O) seem to explain the preference of ethanol for forming heteroclusters instead of clusters. Besides, the O-H---O HBs present a significant covalent character according to topological parameters such as the Laplacian of the electron density and the ratio between the potential and kinetic energy densities evaluated at the bond critical points, with negative values and values between 1 and 2 obtained for these two topological parameters, respectively.
Keywords: ADMP, DFT, ethanol-water azeotrope, Grimme dispersion correction, simulated annealing, weak interactions
Procedia PDF Downloads 103
254 Risk and Reliability Based Probabilistic Structural Analysis of Railroad Subgrade Using Finite Element Analysis
Authors: Asif Arshid, Ying Huang, Denver Tolliver
Abstract:
The Finite Element (FE) method, coupled with ever-increasing computational power, has substantially advanced the reliability of deterministic three-dimensional structural analyses of structures with uniform material properties. However, a railway trackbed is made up of a diverse group of materials including steel, wood, rock and soil, each with its own varying level of heterogeneity and imperfections. The application of probabilistic methods to trackbed structural analysis, incorporating material and geometric variabilities, remains deeply underexplored. The authors developed and validated a three-dimensional FE-based numerical trackbed model and, in this study, investigated the influence of variability in the Young's modulus and thicknesses of the granular layers (ballast and subgrade) on the reliability index (β-index) of the subgrade layer. The influence of these factors is accounted for by changing their Coefficients of Variation (COV) while keeping their means constant. These variations are modeled using a Gaussian normal distribution. Two failure mechanisms in the subgrade, namely progressive shear failure and excessive plastic deformation, are examined. Preliminary results of the risk-based probabilistic analysis for progressive shear failure revealed that variations in ballast depth are the most influential factor for the vertical stress at the top of the subgrade surface. In the case of excessive plastic deformation in the subgrade layer, variations in its own depth and Young's modulus proved most important, while the ballast properties remained almost irrelevant. For both failure modes, it is also observed that the reliability index for subgrade failure increases with increasing COV of ballast depth and subgrade Young's modulus. 
The findings of this work are of particular significance for studying the combined effect of construction imperfections and variations in ground conditions on the structural performance of railroad trackbeds and for evaluating the associated risk. In addition, this work provides an additional tool to supplement deterministic analysis procedures and decision making for railroad maintenance.
Keywords: finite element analysis, numerical modeling, probabilistic methods, risk and reliability analysis, subgrade
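The reliability index computation can be illustrated with a small Monte Carlo sketch: sample the subgrade Young's modulus and ballast depth from Gaussian distributions with fixed means and chosen COVs, evaluate a limit-state margin, and take β as the mean margin over its standard deviation. The stress model and all numerical values below are invented placeholders, not the paper's FE model.

```python
import random
import statistics

def beta_index(n=20000, seed=42,
               e_mean=30e6, e_cov=0.10,    # subgrade Young's modulus, Pa (assumed)
               h_mean=0.35, h_cov=0.10,    # ballast depth, m (assumed)
               load=80e3, strength=55e3):  # applied load / allowable stress (assumed)
    """Monte Carlo estimate of the reliability index beta = mean(g) / std(g),
    where g = strength - stress is the limit-state margin."""
    rng = random.Random(seed)
    margins = []
    for _ in range(n):
        e = rng.gauss(e_mean, e_cov * e_mean)
        h = rng.gauss(h_mean, h_cov * h_mean)
        # Toy stress model: the load spreads with ballast depth, and a stiffer
        # subgrade attracts slightly less stress (purely illustrative).
        stress = load / (1.0 + 8.0 * h) * (e_mean / e) ** 0.25
        margins.append(strength - stress)
    return statistics.mean(margins) / statistics.stdev(margins)
```

A positive β indicates the mean margin sits above zero by that many standard deviations; the larger the β, the lower the failure probability.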
Procedia PDF Downloads 139
253 An International Curriculum Development for Languages and Technology
Authors: Miguel Nino
Abstract:
When considering the challenges of a changing and demanding globalizing world, it is important to reflect on how university students will be prepared for the realities of internationalization, marketization and intercultural conversation. The present study is an interdisciplinary program designed to respond to the needs of the global community. The proposal bridges the humanities and sciences through three different fields: languages, graphic design and computer science, specifically fundamentals of programming such as Python, JavaScript and software animation. The goal of the four-year program is therefore twofold. First, to enable students for intercultural communication between English and other languages such as Spanish, Mandarin, French or German. Second, students will acquire practical software knowledge and relevant employable skills to collaborate on computer-assisted projects that will most likely require an essential programming background in interpreted or compiled languages. In order to be inclusive and constructivist, the cognitive linguistics approach is suggested for the three fields, particularly for languages that rely on the traditional method of repetition. This methodology will help students develop their creativity and encourage them to become independent problem-solving individuals, as languages enhance their common ground of interaction for culture and technology. Participants in this course of study will be evaluated in their second language acquisition at the Intermediate-High level. For graphic design and computer science, students will apply their creative digital skills, as well as the critical thinking skills learned from the cognitive linguistics approach, to collaborate on a group project designed to find solutions to media web design problems or marketing experimentation for a company or the community. It is understood that it will be necessary to apply programming knowledge and skills to deliver the final product. 
In conclusion, the program equips students with linguistic knowledge and skills to be competent in intercultural communication, where English, the lingua franca, remains the medium for marketing and product delivery. In addition to their employability, students can expand their knowledge and skills in digital humanities and computational linguistics, or increase their portfolio in advertising and marketing. These students will be the global human capital of the competitive globalizing community.
Keywords: curriculum, international, languages, technology
Procedia PDF Downloads 443
252 Expert Supporting System for Diagnosing Lymphoid Neoplasms Using Probabilistic Decision Tree Algorithm and Immunohistochemistry Profile Database
Authors: Yosep Chong, Yejin Kim, Jingyun Choi, Hwanjo Yu, Eun Jung Lee, Chang Suk Kang
Abstract:
For the past decades, immunohistochemistry (IHC) has played an important role in the diagnosis of human neoplasms, helping pathologists to make clearer decisions on differential diagnosis, subtyping, personalized treatment plans and, finally, prognosis prediction. However, the IHC performed on various tumors in daily practice often shows conflicting results that are very challenging to interpret. Even a comprehensive diagnosis synthesizing clinical, histologic and immunohistochemical findings can be helpless in some twisted cases. Another important issue is that IHC data are increasing exponentially, and more and more information has to be taken into account. For these reasons, we set out to develop an expert supporting system to help pathologists make better decisions when diagnosing human neoplasms with IHC results. We devised a probabilistic decision tree algorithm and tested it with real case data on lymphoid neoplasms, in which the IHC profile is more important for making a proper diagnosis than in other human neoplasms. We designed the probabilistic decision tree based on Bayes' theorem, programmed the computational process using MATLAB (The MathWorks, Inc., USA) and prepared an IHC profile database (about 104 disease categories and 88 IHC antibodies) based on the WHO classification by reviewing the literature. The initial probability of each neoplasm was set with the epidemiologic data on lymphoid neoplasms in Korea. With the IHC results of 131 sequentially selected patients, the top three presumptive diagnoses for each case were made and compared with the original diagnoses. After review of the data, 124 out of 131 cases were used for the final analysis. As a result, the presumptive diagnoses were concordant with the original diagnoses in 118 cases (93.7%). The major reason for the discordant cases was the similarity of the IHC profiles between two or three different neoplasms. 
The expert supporting system algorithm presented in this study is at an elementary stage and needs further optimization using more advanced technology, such as deep learning with real case data, especially for differentiating T-cell lymphomas. Although it needs more refinement, it may be used to aid pathological decision making in the future. A further application, determining the IHC antibodies to use for a certain subset of differential diagnoses, might also be possible in the near future.
Keywords: database, expert supporting system, immunohistochemistry, probabilistic decision tree
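The core Bayesian update behind such a decision tree can be sketched in a few lines: start from epidemiologic priors, multiply by the likelihood of each observed IHC result given each disease, and renormalize to rank the presumptive diagnoses. The disease names, antibodies and probabilities below are toy values for illustration, not the study's 104-category, 88-antibody database.

```python
def ihc_posterior(priors, likelihoods, results):
    """Bayes update of disease probabilities given IHC stain results.
    priors: {disease: P(disease)}
    likelihoods: {disease: {antibody: P(positive | disease)}}
    results: {antibody: True for positive, False for negative}"""
    post = {}
    for disease, p in priors.items():
        for antibody, positive in results.items():
            lk = likelihoods[disease].get(antibody, 0.5)  # 0.5 = uninformative
            p *= lk if positive else (1.0 - lk)
        post[disease] = p
    z = sum(post.values())
    return {d: v / z for d, v in post.items()}

# Toy three-entity differential with two markers (hypothetical values).
priors = {"DLBCL": 0.4, "PTCL": 0.2, "CHL": 0.4}
likelihoods = {
    "DLBCL": {"CD20": 0.95, "CD3": 0.05},
    "PTCL":  {"CD20": 0.05, "CD3": 0.90},
    "CHL":   {"CD20": 0.30, "CD3": 0.05},
}
post = ihc_posterior(priors, likelihoods, {"CD20": True, "CD3": False})
top3 = sorted(post, key=post.get, reverse=True)[:3]
```

A CD20-positive, CD3-negative profile pushes the posterior strongly toward the B-cell entity, mirroring how the system ranks its top three presumptive diagnoses.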
Procedia PDF Downloads 224
251 Portable and Parallel Accelerated Development Method for Field-Programmable Gate Array (FPGA)-Central Processing Unit (CPU)-Graphics Processing Unit (GPU) Heterogeneous Computing
Authors: Nan Hu, Chao Wang, Xi Li, Xuehai Zhou
Abstract:
The field-programmable gate array (FPGA) has been widely adopted in the high-performance computing domain. In recent years, embedded systems-on-a-chip (SoCs) have come to contain coarse-granularity multi-core CPUs (central processing units) and mobile GPUs (graphics processing units) that can be used as general-purpose accelerators. The motivation is that algorithms with various parallel characteristics can be efficiently mapped to a heterogeneous architecture coupling these three processors. The CPU and GPU offload some computationally intensive tasks from the FPGA to reduce resource consumption and lower the overall cost of the system. However, in common present-day scenarios, applications utilize only one type of accelerator, because development approaches supporting the collaboration of heterogeneous processors face challenges. A systematic approach is therefore needed that offers write-once-run-anywhere portability and high execution performance for the modules mapped to the various architectures, and that facilitates the exploration of the design space. In this paper, a servant-execution-flow model is proposed for abstracting the cooperation of the heterogeneous processors; it supports task partitioning, communication and synchronization. At its first run, the intermediate language, represented by a data flow diagram, can generate executable code for the target processor or be converted into high-level programming languages. Instantiation parameters efficiently control the relationship between the modules and the computational units, including the mapping of the two hierarchical processing units and the adjustment of data-level parallelism. An embedded system for a three-dimensional waveform oscilloscope is selected as a case study. The performance of algorithms such as contrast stretching is analyzed with implementations on various combinations of these processors. 
The experimental results show that the heterogeneous computing system achieves performance similar to that of the pure FPGA implementation with less than 35% of its resources, and comparable energy efficiency.
Keywords: FPGA-CPU-GPU collaboration, design space exploration, heterogeneous computing, intermediate language, parameterized instantiation
Procedia PDF Downloads 118
250 A Fourier Method for Risk Quantification and Allocation of Credit Portfolios
Authors: Xiaoyu Shen, Fang Fang, Chujun Qiu
Abstract:
Herewith we present a Fourier method for credit risk quantification and allocation in the factor-copula model framework. The key insight is that, compared to directly computing the cumulative distribution function of the portfolio loss via Monte Carlo simulation, it is in fact more efficient to calculate the transform of the distribution function in the Fourier domain instead; inverting back to the real domain can then be done in just one step and semi-analytically, thanks to the popular COS method (with some adjustments). We also show that the Euler risk allocation problem can be solved in the same way, since it can be transformed into the problem of evaluating a conditional cumulative distribution function. Once the conditional or unconditional cumulative distribution function is known, one can easily calculate various risk metrics. The proposed method not only fills a niche in the literature of accurate numerical methods for risk allocation, to the best of our knowledge, but may also serve as a much faster alternative to Monte Carlo simulation for risk quantification in general. It can cope with various factor-copula model choices, which we demonstrate via examples of a two-factor Gaussian copula and a two-factor Gaussian-t hybrid copula. The fast error convergence is proved mathematically and then verified by numerical experiments, in which Value-at-Risk, Expected Shortfall and conditional Expected Shortfall are taken as examples of commonly used risk metrics. The calculation speed and accuracy are shown to be significantly superior to Monte Carlo simulation for real-sized portfolios. The computational complexity is, by design, primarily driven by the number of factors rather than the number of obligors, as is the case with Monte Carlo simulation. 
The limitation of this method lies in the "curse of dimensionality" that is intrinsic to multi-dimensional numerical integration, which, however, can be relaxed with the help of dimension reduction techniques and/or parallel computing, as we will demonstrate in a separate paper. The potential applications of this method have a wide range: from credit derivatives pricing to economic capital calculation of the banking book, default risk charge and incremental risk charge computation of the trading book, and even other risk types than credit risk.
Keywords: credit portfolio, risk allocation, factor copula model, the COS method, Fourier method
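The one-step Fourier inversion at the heart of the COS method can be illustrated on a toy case: recovering a standard normal density from its characteristic function by a truncated Fourier-cosine expansion on [a, b]. This is a generic textbook sketch, not the paper's credit-portfolio implementation; the interval [-10, 10] and 128 terms are chosen for illustration.

```python
import cmath
import math

def cos_density(phi, x, a=-10.0, b=10.0, n_terms=128):
    """Recover a density at point x from its characteristic function `phi`
    via the COS (Fourier-cosine) expansion on [a, b]."""
    total = 0.0
    for k in range(n_terms):
        u = k * math.pi / (b - a)
        # Fourier-cosine coefficient F_k; the k = 0 term is weighted by 1/2.
        f_k = (2.0 / (b - a)) * (phi(u) * cmath.exp(-1j * u * a)).real
        weight = 0.5 if k == 0 else 1.0
        total += weight * f_k * math.cos(k * math.pi * (x - a) / (b - a))
    return total

# Characteristic function of the standard normal distribution.
phi_normal = lambda u: math.exp(-0.5 * u * u)
```

For a smooth density like the normal, the cosine series converges exponentially fast in the number of terms, which is the source of the speed advantage over Monte Carlo claimed in the abstract.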
Procedia PDF Downloads 168
249 Selenuranes as Cysteine Protease Inhibitors: Theorical Investigation on Model Systems
Authors: Gabriela D. Silva, Rodrigo L. O. R. Cunha, Mauricio D. Coutinho-Neto
Abstract:
In the last four decades, the biological activities of selenium compounds have received great attention, particularly hypervalent derivatives of selenium(IV) used as enzyme inhibitors. The unregulated activity of cysteine proteases is related to the development of several pathologies, such as neurological disorders, cardiovascular diseases, obesity, rheumatoid arthritis, cancer and parasitic infections. These enzymes are therefore a valuable target for designing new small-molecule inhibitors such as selenuranes. Even though there have been advances in the synthesis and design of new selenurane-based inhibitors, little is known about their mechanism of action. It is a given that inhibition occurs through the reaction between a thiol group of the enzyme and the chalcogen atom. However, several open questions remain about the nature of the mechanism (associative vs. dissociative) and about the nature of the reactive species in solution under physiological conditions. In this work, we performed a theoretical investigation on model systems to study the possible routes of the substitution reactions. Several nucleophiles may be present in biological systems; our interest centers on the thiol groups of the cysteine proteases and the hydroxyls of the aqueous environment. We therefore expect this study to clarify the possibility of a two-stage reaction route, the first stage consisting of the substitution of the chlorine atoms by hydroxyl groups, and the second replacing these hydroxyl groups with thiol groups on the selenuranes. The structures of the selenuranes and nucleophiles were optimized using density functional theory with the B3LYP functional and a 6-311+G(d) basis set. Solvent was treated using the IEFPCM method as implemented in the Gaussian 09 code. Our results indicate that water reacts preferentially with the selenuranes, and the hydroxyl groups are then replaced by thiol groups. 
The computed energies are -106.0730423 kcal/mol for double substitution by hydroxyl groups and 96.63078511 kcal/mol for thiol groups. Solvation and pH reduction promote this route, raising the energy for the reaction with hydroxyl groups to -50.75637672 kcal/mol and lowering the energy for thiols to 7.917767189 kcal/mol. Alternative routes were analyzed for monosubstitution (considering the competition between Cl, OH and SH groups), and they suggest the same route. Similar results were obtained for the aliphatic and aromatic selenuranes studied.
Keywords: chalcogens, computational study, cysteine proteases, enzyme inhibitors
Procedia PDF Downloads 304
248 ANSYS FLUENT Simulation of Natural Convection and Radiation in a Solar Enclosure
Authors: Sireetorn Kuharat, Anwar Beg
Abstract:
In this study, the multi-mode heat transfer characteristics of spacecraft solar collectors are investigated computationally. Two-dimensional, steady-state, incompressible, laminar Newtonian viscous convective-radiative heat transfer is simulated in a rectangular solar collector geometry. The ANSYS FLUENT finite volume code (version 17.2) is employed to simulate the thermo-fluid characteristics. Several radiative transfer models available in the ANSYS workbench are employed, including the classical Rosseland flux model and the more elegant P1 flux model. Mesh-independence tests are conducted. The simulations are validated against a computational Harlow-Welch MAC (Marker and Cell) finite difference method, with excellent correlation. The influence of aspect ratio, Prandtl number (Pr), Rayleigh number (Ra) and radiative flux model on temperature, isotherms, velocity and pressure is evaluated and visualized in color plots. Additionally, the local convective heat flux is computed and solutions are compared with the MAC solver for various buoyancy effects (e.g., Ra = 10,000,000), achieving excellent agreement. The P1 model is shown to better predict the actual influence of solar radiative flux on thermal fluid behavior compared with the limited Rosseland model. With increasing Rayleigh number, the hot zone emanating from the base of the collector is found to penetrate deeper into the collector and rises symmetrically, dividing into two vortex regions at very high buoyancy (Ra > 100,000). With increasing Prandtl number (three gas cases are examined, respectively a hydrogen gas mixture, air and ammonia gas), there is also a progressive incursion of the hot zone at the solar collector base higher into the collector space and, simultaneously, greater asymmetry of the dual isothermal zones. 
With increasing aspect ratio (a wider base relative to the height of the solar collector geometry), there is a stronger thermal convection pattern around the whole geometry, higher temperatures and the elimination of the cold upper zone associated with lower aspect ratios.
Keywords: thermal convection, radiative heat transfer, solar collector, Rayleigh number
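The governing similarity parameters of the study can each be reproduced with a one-line sketch: the Rayleigh number Ra = gβΔTL³/(να) and the Prandtl number Pr = ν/α. The air-like property values below are rough textbook numbers assumed for illustration, not the paper's gas mixtures.

```python
# Rough properties of air near 300 K (assumed illustrative values).
G = 9.81           # gravitational acceleration, m/s^2
BETA = 1.0 / 300   # thermal expansion coefficient, 1/K
NU = 1.6e-5        # kinematic viscosity, m^2/s
ALPHA = 2.2e-5     # thermal diffusivity, m^2/s

def rayleigh(delta_t, length):
    """Ra = g * beta * dT * L^3 / (nu * alpha): buoyancy vs. diffusion."""
    return G * BETA * delta_t * length ** 3 / (NU * ALPHA)

def prandtl():
    """Pr = nu / alpha: momentum vs. thermal diffusivity."""
    return NU / ALPHA
```

Even a modest 0.1 m enclosure with a 20 K temperature difference already exceeds the Ra > 100,000 threshold at which the abstract reports the hot zone splitting into two vortex regions.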
Procedia PDF Downloads 119
247 Recursion, Merge and Event Sequence: A Bio-Mathematical Perspective
Authors: Noury Bakrim
Abstract:
Formalization is indeed foundational to Mathematical Linguistics, as demonstrated by the pioneering works. While dialoguing with this frame, we nonetheless propose, in our approach to language as a real object, a mathematical linguistics/biosemiotics defined as a dialectical synthesis between induction and computational deduction. Therefore, relying on the parametric interaction of cycles, rules and features giving way to a sub-hypothetic biological point of view, we first hypothesize a factorial equation as an explanatory principle within the Category Mathematics of the Ergobrain: our computational proposal of Universal Grammar rules per cycle, or a scalar determination (multiplying right/left columns of the determinant matrix and right/left columns of the logarithmic matrix) of the transformable matrix for rule addition/deletion and cycles within representational mapping/cycle heredity, based on the factorial example, being the logarithmic exponent or power of rule deletion/addition. This enables us to propose an extension of the minimalist merge/label notions to a Language Merge (as a computing principle) within cycle recursion relying on combinatorial mapping of rule hierarchies onto the external Entax of the Event Sequence. 
Therefore, to define combinatorial maps as the language merge of features and combinatorial hierarchical restrictions (governing, commanding and other rules), we secondly hypothesize from our results feature/hierarchy exponentiation on a graph representation deriving from Gromov's Symbolic Dynamics, where combinatorial vertices from Fe are set to combinatorial vertices of Hie and edges go from Fe to Hie, such that for every combinatorial group there are restriction maps representing different derivational levels that are subgraphs: the intersection on I defines pullbacks and deletion rules (under restriction maps), then under disjunction edges H, such that for the combinatorial map P belonging to Hie exponentiation by intersection there are pullbacks and projections that are equal to the restriction maps RM₁ and RM₂. The model will draw on experimental biomathematics as well as structural frames, with a focus on Amazigh and English (cases from phonology/micro-semantics and syntax) and a shift from structure to event (especially the Amazigh formant principle resolving its morphological heterogeneity).
Keywords: rule/cycle addition/deletion, bio-mathematical methodology, general merge calculation, feature exponentiation, combinatorial maps, event sequence
Procedia PDF Downloads 128
246 A Local Tensor Clustering Algorithm to Annotate Uncharacterized Genes with Many Biological Networks
Authors: Paul Shize Li, Frank Alber
Abstract:
A fundamental task of clinical genomics is to unravel the functions of genes and their associations with disorders. Although experimental biology has made efforts to discover and elucidate the molecular mechanisms of individual genes over the past decades, about 40% of human genes still have unknown functions, not to mention the diseases they may be related to. For biologists interested in a particular gene with unknown functions, a powerful computational method tailored to inferring the functions and disease relevance of uncharacterized genes is strongly needed. Studies have shown that genes strongly linked to each other in multiple biological networks are more likely to have similar functions. This indicates that densely connected subgraphs in multiple biological networks are useful in the functional and phenotypic annotation of uncharacterized genes. Therefore, in this work, we have developed an integrative network approach to identify frequent local clusters, defined as those densely connected subgraphs that frequently occur in multiple biological networks and contain the query gene, which has few or no disease or function annotations. This is a local clustering algorithm that models multiple biological networks sharing the same gene set as a three-dimensional array, the so-called tensor, and employs a tensor-based optimization method to efficiently find the frequent local clusters. Specifically, massive public gene expression data sets that comprehensively cover dynamic, physiological and environmental conditions are used to generate hundreds of gene co-expression networks. By integrating these gene co-expression networks, for a given uncharacterized gene of interest, the proposed method can be applied to identify the frequent local clusters containing this gene. Finally, those frequent local clusters are used for the function and disease annotation of the uncharacterized gene. 
This local tensor clustering algorithm outperformed the competing tensor-based algorithm in both module discovery and running time. We also demonstrated the proposed method on real data comprising hundreds of gene co-expression networks and showed that it can comprehensively characterize the query gene. Therefore, this study provides a new tool for annotating uncharacterized genes and has great potential to assist clinical genomic diagnostics.
Keywords: local tensor clustering, query gene, gene co-expression network, gene annotation
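The frequent-local-cluster idea can be sketched without the tensor optimization machinery: stack the co-expression adjacency matrices into a genes × genes × networks tensor and greedily grow a cluster around the query gene, keeping only genes densely connected to the cluster in a sufficient fraction of the networks. The function name, thresholds, and greedy strategy below are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

def frequent_local_cluster(networks, query, k=5, min_freq=0.5, tau=0.6):
    """Greedy sketch: grow a cluster around `query` that is densely
    connected in at least a `min_freq` fraction of the networks."""
    # Stack the adjacency matrices into a genes x genes x networks tensor.
    tensor = np.stack(networks, axis=2)
    cluster = [query]
    candidates = set(range(tensor.shape[0])) - {query}
    while len(cluster) < k and candidates:
        best, best_freq = None, 0.0
        for g in candidates:
            # Per-network density: mean edge weight from g to the cluster.
            dens = tensor[g, cluster, :].mean(axis=0)
            # Frequency: fraction of networks where that density exceeds tau.
            freq = (dens >= tau).mean()
            if freq > best_freq:
                best, best_freq = g, freq
        if best is None or best_freq < min_freq:
            break
        cluster.append(best)
        candidates.remove(best)
    return sorted(cluster)
```

With three toy networks in which genes 0, 1, and 2 form a clique, querying gene 0 recovers the clique as the frequent local cluster.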
Procedia PDF Downloads 168
245 Detecting Tomato Flowers in Greenhouses Using Computer Vision
Authors: Dor Oppenheim, Yael Edan, Guy Shani
Abstract:
This paper presents an image analysis algorithm to detect and count yellow tomato flowers in a greenhouse with uneven illumination, complex growth conditions, and varying flower sizes. The algorithm is designed to be deployed on a drone that flies through greenhouses to accomplish several tasks, such as pollination and yield estimation. Detecting the flowers can provide useful information for the farmer, such as the number of flowers in a row and the number of flowers pollinated since the last visit to the row. The algorithm is designed to handle the real-world difficulties of a greenhouse, including varying lighting conditions, shadowing, and occlusion, while respecting the computational limitations of the drone's simple processor. The algorithm identifies flowers using an adaptive global threshold, segmentation over the HSV color space, and morphological cues. The adaptive threshold divides the images into darker and lighter images; segmentation on hue, saturation, and value is then performed accordingly, and classification is done according to the size and location of the flowers. 1069 images of greenhouse tomato flowers were acquired in a commercial greenhouse in Israel using two different RGB cameras, an LG G4 smartphone and a Canon PowerShot A590. The images were acquired from multiple angles and distances and were sampled manually at various times of day to obtain varying lighting conditions. Ground truth was created by manually tagging approximately 25,000 individual flowers in the images. Sensitivity analyses on the acquisition angle, time of day, camera, and thresholding type were performed, and precision, recall, and the derived F1 score were calculated. Results indicate better performance for the view angle facing the flowers than for any other angle. Acquiring images in the afternoon yielded the best precision and recall.
Applying a global adaptive threshold improved the median F1 score by 3%. Results showed no difference between the two cameras. Using hue values of 0.12-0.18 in the segmentation process provided the best precision, recall, and F1 score. With these values, average precision and recall over all images were 74% and 75%, respectively, with an F1 score of 0.73. Further analysis showed a 5% increase in precision and recall when analyzing images acquired in the afternoon and from the front viewpoint.
Keywords: agricultural engineering, image processing, computer vision, flower detection
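A minimal version of the segmentation stage could look as follows, assuming the image has already been converted to HSV with all channels scaled to [0, 1]. Only the 0.12-0.18 hue band comes from the abstract; the saturation cut-offs and the mean-brightness split standing in for the adaptive global threshold are assumptions for illustration.

```python
import numpy as np

def flower_mask(hsv, hue_lo=0.12, hue_hi=0.18):
    """Sketch of HSV flower segmentation; `hsv` is an H x W x 3 array
    with hue, saturation, and value channels each in [0, 1]."""
    h, s, v = hsv[..., 0], hsv[..., 1], hsv[..., 2]
    # Crude adaptive split: darker vs. lighter pixels by mean brightness,
    # demanding higher saturation in the lighter population to suppress
    # washed-out pixels (thresholds assumed, not from the paper).
    bright = v >= v.mean()
    sat_ok = np.where(bright, s >= 0.35, s >= 0.20)
    # Keep only pixels whose hue falls in the yellow-flower band.
    return (h >= hue_lo) & (h <= hue_hi) & sat_ok
```

A real pipeline would follow this mask with the morphological cues and size/location classification the abstract describes.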
Procedia PDF Downloads 329
244 Effects of Front Porch and Loft on Indoor Ventilation in the Renewal of Beijing Courtyard
Authors: Zhongzhong Zeng, Zichen Liang
Abstract:
In recent years, Beijing courtyards have been facing renewal and renovation, and residents face problems such as small house areas, large household sizes, and old and dangerous houses. Among the many renovation methods, the authors note two common practices: using the front porch to expand the floor area and adding a loft. Residents and architects, however, rarely give serious consideration to the interior's ventilation performance before beginning a remodeling. The aim of this article is to explore the positive or negative impacts of both front porch and loft structures on indoor ventilation in the courtyard; ventilation, in turn, is crucial to the indoor environmental quality of a home. The major method used in this study is comparative analysis: the authors create four alternative house models, with and without a front porch and with and without a loft as the two variables, and examine indoor ventilation using the CFD (Computational Fluid Dynamics) technique. Analysis of the sectional airflow and of the airflow contours at a height of 1.5 m shows that the loft, to a certain extent, disrupts the airflow organization of the building and makes the high windows in the rear wall less effective. Occupying the front porch to enlarge the house area has no significant effect on ventilation, but occupying the front porch and adding a loft at the same time should be avoided in building renovation. The findings led to the following recommendations: strive to preserve the courtyard building's original architectural design and adjust only the inappropriate elements or constructions. Ventilation in the loft is inadequate, yet inhabitants typically use the loft as a living area.
This may lead to the building relying more on air conditioning in summer, which would raise energy demand. The front porch serves as a transition space as well as a source of shade, weather protection, and indoor ventilation. In conclusion, the examination of interior environments in future studies should concentrate on cross-disciplinary, multi-angle, and multi-level research topics.
Keywords: Beijing courtyard renewal, CFD, indoor environment, ventilation analysis
Procedia PDF Downloads 81
243 Development of Numerical Method for Mass Transfer across the Moving Membrane with Selective Permeability: Approximation of the Membrane Shape by Level Set Method for Numerical Integral
Authors: Suguru Miyauchi, Toshiyuki Hayase
Abstract:
Biological membranes have selective permeability, and capsules or cells enclosed by a membrane deform under osmotic flow. This mass transport phenomenon is observed everywhere in a living body. To understand mass transfer in a body, it is necessary to consider the mass transfer across the membrane as well as the deformation of the membrane by a flow. To our knowledge, no numerical method for mass transfer across a moving membrane has been established, owing to the difficulty of treating the mass flux permeating through a moving membrane with selective permeability. Existing methods for mass transfer across a membrane use an approximate delta function to communicate quantities on the interface. These methods can reproduce permeation of the solute but cannot reproduce non-permeation, and their computational accuracy decreases with decreasing membrane permeability coefficient. This study aims to develop a numerical method capable of treating three-dimensional problems of mass transfer across a moving flexible membrane. One of the authors previously developed a high-accuracy numerical method based on the finite element method. This method captures the discontinuity on the membrane sharply by considering the jumps in concentration and concentration gradient in the finite element discretization. Its formulation accounts for membrane movement, and both permeable and non-permeable membranes can be treated. However, the numerical integral requires searching for the intersection points of the membrane with fluid element boundaries and splitting each fluid element into sub-elements, a cumbersome operation for a three-dimensional problem. In this paper, we propose an improved method that avoids the search and split operations and confirm its effectiveness.
The membrane shape is treated implicitly by introducing a level set function: within each fluid element, the membrane shape is expressed by the shape functions of the finite element method. Numerical experiments showed that third-order shape functions reproduce the membrane shapes appropriately. The same level of accuracy as the previous method, which used search and split operations, was achieved by using a sufficient number of sampling points in the numerical integral. The effectiveness of the method was confirmed by solving several model problems.
Keywords: finite element method, level set method, mass transfer, membrane permeability
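The split-free integration idea can be illustrated in a simplified form: instead of finding where the interface crosses an element and splitting it into sub-elements, evaluate an implicit level set function at many sampling points and keep only those on the desired side of the membrane. This is a toy two-dimensional sketch under assumed names; the paper's actual scheme uses finite element shape functions and a jump-aware discretization.

```python
import numpy as np

def split_free_integral(f, phi, n=200):
    """Integrate f over the part of the unit square where phi < 0
    (one side of the membrane) by dense sampling, with no explicit
    splitting of the element at the interface."""
    x, y = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n))
    inside = phi(x, y) < 0        # level set selects the sub-domain
    return f(x, y)[inside].sum() / n**2
```

For the level set phi(x, y) = x - 0.5, the region phi < 0 is the left half of the square, so integrating f = 1 returns approximately 0.5.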
Procedia PDF Downloads 250
242 A Lightweight Blockchain: Enhancing Internet of Things Driven Smart Buildings Scalability and Access Control Using Intelligent Direct Acyclic Graph Architecture and Smart Contracts
Authors: Syed Irfan Raza Naqvi, Zheng Jiangbin, Ahmad Moshin, Pervez Akhter
Abstract:
Currently, IoT systems depend on a centralized client-server architecture, which causes various scalability and privacy vulnerabilities. Distributed ledger technology (DLT) introduces a set of opportunities for the IoT, leading to practical ideas for existing components at all levels of existing architectures. Blockchain technology (BCT), as in Bitcoin (BTC) and Ethereum, appears to be one approach to solving several IoT problems and offers multiple possibilities. However, IoT devices are resource-constrained, with insufficient capacity and computational headroom to process blockchain consensus mechanisms; the existing challenges of traditional BCT for the IoT are poor scalability, energy efficiency, and transaction fees. IOTA is a distributed ledger based on a Directed Acyclic Graph (DAG) that ensures machine-to-machine (M2M) micro-transactions are free of charge. IOTA has the potential to address existing IoT-related difficulties such as infrastructure scalability, privacy, and access control mechanisms. We propose an architecture, SLDBI: a Scalable, Lightweight DAG-based Blockchain Design for Intelligent IoT Systems, which adapts the DAG-based Tangle and implements a lightweight message data model to address the IoT limitations. It enables the smooth integration of new IoT devices into a variety of apps. SLDBI provides comprehensive access control, energy efficiency, and scalability in IoT ecosystems by utilizing the Masked Authenticated Messaging (MAM) protocol and the IOTA Smart Contract Protocol (ISCP). Furthermore, we propose performing the proof-of-work (PoW) computation on the full node in an energy-efficient way. Experiments have been carried out to show the capability of the Tangle to achieve better scalability while maintaining energy efficiency.
The findings show user access control management at granular levels and demonstrate scaling up to massive networks with thousands of IoT nodes, such as Smart Connected Buildings (SCBDs).
Keywords: blockchain, IoT, directed acyclic graph, scalability, access control, architecture, smart contract, smart connected buildings
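A stripped-down sketch of the DAG-ledger (Tangle) idea: each new transaction approves up to two unapproved "tip" transactions and carries a small proof-of-work, so the ledger grows as a DAG rather than a chain of fee-bearing blocks. The class and method names here are assumptions for illustration; this is not the IOTA or SLDBI implementation, and a real deployment would move the PoW to a full node, as the authors propose.

```python
import hashlib
import random

class Tangle:
    """Minimal DAG-ledger sketch (illustrative; not the IOTA protocol)."""

    def __init__(self, difficulty=2):
        self.difficulty = difficulty
        self.txs = {"genesis": []}   # transaction id -> approved parents
        self.approved = set()        # ids that have at least one approver

    def _pow(self, payload):
        # Tiny proof-of-work: find a nonce giving a hash with a zero prefix.
        target = "0" * self.difficulty
        nonce = 0
        while True:
            digest = hashlib.sha256(f"{payload}:{nonce}".encode()).hexdigest()
            if digest.startswith(target):
                return digest
            nonce += 1

    def attach(self, payload):
        # Tip selection: approve up to two currently unapproved transactions.
        tips = [t for t in self.txs if t not in self.approved] or ["genesis"]
        parents = random.sample(tips, min(2, len(tips)))
        digest = self._pow(payload)
        self.txs[digest] = parents
        self.approved.update(parents)
        return digest
```

Because every transaction attaches its own PoW and approves earlier ones, there are no miners to pay, which is what makes fee-free M2M micro-transactions possible in this model.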
Procedia PDF Downloads 122