Search results for: dynamic modelling
3450 Exergy Analysis and Evaluation of the Different Flowsheeting Configurations for CO₂ Capture Plant Using 2-Amino-2-Methyl-1-Propanol
Authors: Ebuwa Osagie, Vasilije Manovic
Abstract:
Exergy analysis identifies the location, sources, and magnitude of thermodynamic inefficiencies in a thermal system. Thus, both qualitative and quantitative assessments can be made with exergy, unlike energy, which supports quantitative assessment only. The main purpose of exergy analysis is to identify where exergy is destroyed; reducing the exergy destruction and losses associated with the capture plant systems can therefore improve work potential. Furthermore, thermodynamic analysis of different process configurations helps to identify opportunities for reducing the steam requirements of each configuration. This paper presents steady-state simulation and exergy analysis of a 2-amino-2-methyl-1-propanol (AMP)-based post-combustion capture (PCC) plant. Exergy analysis performed for the AMP-based plant and its different configurations revealed that the rich split with intercooling configuration gave the highest exergy efficiency, 73.6%, while the intercooling and reference AMP-based plants achieved 57.3% and 55.8%, respectively.
Keywords: 2-amino-2-methyl-1-propanol, modelling and simulation, post-combustion capture plant, exergy analysis, flowsheeting configurations
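As a minimal numerical illustration of the efficiency figures quoted above (the flow values are hypothetical placeholders; only the 73.6% ratio comes from the abstract), exergy efficiency can be expressed as the ratio of useful exergy recovered to exergy supplied, the balance being destruction and losses:

```python
def exergy_efficiency(exergy_in, exergy_useful_out):
    """Rational exergy efficiency: useful exergy out / exergy supplied.
    The remainder (exergy_in - exergy_useful_out) is destroyed or lost."""
    return exergy_useful_out / exergy_in

# Hypothetical exergy flows (MW) chosen so the ratio matches the reported
# 73.6% for the rich split with intercooling configuration.
eta = exergy_efficiency(exergy_in=100.0, exergy_useful_out=73.6)
destruction = 100.0 - 73.6
print(f"{eta:.1%}, destroyed/lost: {destruction:.1f} MW")  # 73.6%, destroyed/lost: 26.4 MW
```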
Procedia PDF Downloads 164
3449 Key Factors Influencing Individual Knowledge Capability in KIFs
Authors: Salman Iqbal
Abstract:
Knowledge management (KM) literature has mainly focused on the antecedents of KM. The purpose of this study is to investigate the effect of specific human resource management (HRM) practices on employee knowledge sharing and its outcome, individual knowledge capability. Based on previous literature, a model is proposed and hypotheses are formulated. The cross-sectional dataset comes from a sample of 19 knowledge-intensive firms (KIFs). The study applied an item-parceling technique followed by Confirmatory Factor Analysis (CFA) on the latent constructs of the research model. Employees’ collaboration and interpersonal trust can help to improve their knowledge-sharing behaviour and knowledge capability within organisations. This study suggests that, in future, better statistical insight would be possible with a larger sample. The findings are beneficial for scholars, policy makers and practitioners. The empirical results are based entirely on employees’ perceptions and make a significant research contribution, given the dearth of empirical research focusing on the subcontinent.
Keywords: employees’ collaboration, individual knowledge capability, knowledge sharing, monetary rewards, structural equation modelling
Procedia PDF Downloads 275
3448 Assessment of the High-Speed Ice Friction of Bob Skeleton Runners
Authors: Agata Tomaszewska, Timothy Kamps, Stephan R. Turnock, Nicola Symonds
Abstract:
Bob skeleton is a highly competitive sport in which an athlete reaches speeds of up to 40 m/s sliding, head first, down an ice track. It is believed that friction between the runners and the ice contributes significantly to the total energy loss during a bob skeleton descent. Only limited experimental data are available regarding the friction of bob skeleton runners, or indeed of steel on ice, at high sliding speeds (> 20 m/s). Testing methods used to investigate the friction of steel on ice in winter sports are outlined, and their accuracy and repeatability discussed. A systems-thinking approach was used to investigate the runner-ice interaction during sliding and to create concept designs of three ice tribometers. The operational envelope of the bob skeleton system has been defined through mathematical modelling. Designs of drum, linear and inertia pin-on-disk tribometers were developed specifically for bob skeleton runner testing, with the requirements of reaching speeds of up to 40 m/s and facilitating sliding on fresh ice. The design constraints are outlined, and the proposed solutions compared on ease of operation, accuracy and development cost.
Keywords: bob skeleton, ice friction, high-speed tribometers, sliding friction
Procedia PDF Downloads 261
3447 Lotus Mechanism: Validation of Deployment Mechanism Using Structural and Dynamic Analysis
Authors: Parth Prajapati, A. R. Srinivas
Abstract:
The purpose of this paper is to validate the concept of the Lotus Mechanism using Computer Aided Engineering (CAE) tools, considering statics and dynamics through actual time dependence, including the inertial forces acting on the mechanism joints. For a 1.2 m mirror made of hexagonal segments, with simple harnesses and three-point supports, the maximum segment diameter is 400 mm, the minimum segment base thickness is 1.5 mm, and the maximum rib height is taken as 12 mm. Manufacturing challenges for the segments are explored using manufacturing research and development approaches, to enable the use of the large lightweight mirrors required for future space systems.
Keywords: dynamics, manufacturing, reflectors, segmentation, statics
Procedia PDF Downloads 373
3446 Finite Element Analysis of the Blanking and Stamping Processes of Nuclear Fuel Spacer Grids
Authors: Rafael Oliveira Santos, Luciano Pessanha Moreira, Marcelo Costa Cardoso
Abstract:
The spacer grid assembly supporting the nuclear fuel rods is an important concern in the design of structural components of a Pressurized Water Reactor (PWR). The spacer grid is composed of springs and dimples, which are formed from a strip sheet by means of blanking and stamping processes. In this paper, the blanking process and tooling parameters are evaluated by means of a 2D plane-strain finite element model in order to predict the punch load and the quality of the sheared edges of Inconel 718 strips used for nuclear spacer grids. A 3D finite element model is also proposed to predict the tooling loads resulting from the stamping of a preformed Inconel 718 strip and to analyse the effects of residual stress upon the spring and dimple design geometries of a nuclear spacer grid.
Keywords: blanking process, damage model, finite element modelling, Inconel 718, spacer grids, stamping process
Procedia PDF Downloads 345
3445 Calibration of Discrete Element Method Parameters for Modelling DRI Pellets Flow
Authors: A. Hossein Madadi-Najafabadi, Masoud Nasiri
Abstract:
The discrete element method (DEM) is a powerful technique for numerically modelling the flow of granular materials such as direct reduced iron (DRI). It enables the study of processes and equipment related to the production and handling of the material. However, the characteristics and properties of the granules have to be adjusted precisely to achieve reliable results in a DEM simulation. The main properties for a DEM simulation are size distribution, density, Young's modulus, Poisson's ratio, and the coefficients of restitution, rolling friction and sliding friction. In the present paper, these properties are determined for DEM simulation of DRI pellets. A reliable DEM simulation would contribute to optimizing the DRI handling system in an iron-making plant. Among the properties mentioned, Young's modulus is the most important parameter, and is usually hard to obtain for particulate solids. Here, a special method is utilized to determine this parameter precisely for DRI.
Keywords: discrete element method, direct reduced iron, simulation parameters, granular material
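As an illustration of how calibrated parameters such as stiffness and the coefficient of restitution enter a DEM simulation, the sketch below implements a generic linear spring-dashpot normal contact law (a common DEM contact model, not necessarily the one used by the authors; all numerical values are hypothetical):

```python
import math

def normal_contact_force(overlap, overlap_rate, k_n, m_eff, e):
    """Linear spring-dashpot normal force, a common DEM contact model.
    overlap: particle overlap (m); overlap_rate: its time derivative (m/s);
    k_n: normal stiffness (N/m); m_eff: effective mass of the pair (kg);
    e: coefficient of restitution (dimensionless)."""
    # Damping coefficient chosen so a binary collision rebounds with restitution e.
    c_n = -2.0 * math.log(e) * math.sqrt(k_n * m_eff) / math.sqrt(
        math.pi ** 2 + math.log(e) ** 2)
    return k_n * overlap + c_n * overlap_rate

# Hypothetical pellet contact: 0.1 mm overlap, approaching at 0.01 m/s.
print(normal_contact_force(1e-4, 0.01, k_n=1e5, m_eff=0.01, e=0.8))
```

In a full simulation this force, together with tangential friction and rolling resistance terms, would be integrated over time for every contacting pellet pair.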
Procedia PDF Downloads 180
3444 Optics Meets Microfluidics for Highly Sensitive Force Sensing
Authors: Iliya Dimitrov Stoev, Benjamin Seelbinder, Elena Erben, Nicola Maghelli, Moritz Kreysing
Abstract:
Despite the revolutionizing impact of optical tweezers in materials science and cell biology to the present date, trapping has so far relied extensively on specific material properties of the probe, and local heating has limited applications related to investigating dynamic processes within living systems. To overcome these limitations while maintaining high sensitivity, here we present a new optofluidic approach that can be used to gently trap microscopic particles and measure femtonewton forces in a contact-free manner and with thermally limited precision.
Keywords: optofluidics, force measurements, microrheology, FLUCS, thermoviscous flows
Procedia PDF Downloads 171
3443 Modelling and Simulation of Milk Fouling
Authors: Harche Rima, Laoufi Nadia Aicha
Abstract:
This work focuses on the study and modelling of the fouling phenomenon in a vertical pipe. Milk is one of the fluids subject to fouling, through the denaturation of its proteins, especially lactoglobulin, the active element of milk; for ease of use, we chose milk as the fouling fluid. The test section of our installation is treated as a counter-current, closed-circuit tubular heat exchanger. A simple mathematical model due to Kern and Seaton, based on the kinetics of the fouling resistance, was used to evaluate the influence of the operating parameters (fluid flow velocity and exchange wall temperature) on the fouling resistance. The influence of the variation of the fouling resistance with the operating conditions on the efficiency of the heat exchanger, and the importance of the fouled-state exchange coefficient as an exchange quality control parameter, were discussed and examined. In addition, a scanning electron microscope analysis was performed on the milk deposit in order to obtain its actual image and composition, from which the thickness of the deposit was calculated.
Keywords: fouling, milk, tubular heat exchanger, fouling resistance
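The Kern and Seaton model mentioned above describes the fouling resistance as an asymptotic function of time, the net result of competing deposition and removal: R_f(t) = R_f∞ (1 − e^(−t/θ)). A minimal sketch, with hypothetical parameter values:

```python
import math

def fouling_resistance(t, r_inf, theta):
    """Kern-Seaton asymptotic fouling model.
    t: time; r_inf: asymptotic fouling resistance (m^2.K/W);
    theta: time constant (same unit as t)."""
    return r_inf * (1.0 - math.exp(-t / theta))

# Hypothetical values: asymptotic resistance 5e-4 m^2.K/W, time constant 10 h.
for t in (0.0, 10.0, 50.0):
    print(t, fouling_resistance(t, r_inf=5e-4, theta=10.0))
```

Fitting r_inf and theta to measured resistance curves is how the influence of flow velocity and wall temperature would be quantified.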
Procedia PDF Downloads 52
3442 Mecano-Reliability Approach Applied to a Water Storage Tank Placed on Ground
Authors: Amar Aliche, Hocine Hammoum, Karima Bouzelha, Arezki Ben Abderrahmane
Abstract:
Traditionally, the dimensioning of storage tanks is conducted with a deterministic approach based on partial safety coefficients. These coefficients are applied to account for uncertainties related to hazards, the properties of the materials used, and the applied loads. However, the use of these safety factors in the design process does not ensure an optimal and reliable solution, and can sometimes lead to a lack of robustness of the structure. Reliability theory, based on a probabilistic formulation of structural safety, can respond in a suitable manner: it allows the construction of a model in which uncertain data are represented by random variables, and therefore permits a better appreciation of safety margins through confidence indicators. The work presented in this paper consists of a mecano-reliability analysis of a concrete storage tank placed on the ground. The classical Monte Carlo simulation method is used to evaluate the failure probability of the concrete tank, taking the seismic acceleration as a random variable.
Keywords: reliability approach, storage tanks, Monte Carlo simulation, seismic acceleration
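The Monte Carlo procedure described above can be sketched as follows. The limit-state function, capacity and demand values below are hypothetical placeholders; only the idea of sampling the seismic acceleration as a random variable and counting failures comes from the abstract:

```python
import random

def monte_carlo_pf(n_samples, capacity, demand_per_g, accel_mean, accel_sd, seed=0):
    """Estimate the failure probability P[g < 0] for the limit state
    g = capacity - demand, with demand proportional to a random seismic
    acceleration (truncated-normal here; all structural values hypothetical)."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n_samples):
        a = max(0.0, rng.gauss(accel_mean, accel_sd))  # acceleration sample (g)
        if capacity - demand_per_g * a < 0.0:
            failures += 1
    return failures / n_samples

pf = monte_carlo_pf(100_000, capacity=1.0, demand_per_g=2.0,
                    accel_mean=0.25, accel_sd=0.1)
print(pf)  # failure requires a > 0.5, i.e. 2.5 sd above the mean
```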
Procedia PDF Downloads 308
3441 The Physical and Physiological Profile of Professional Muay Thai Boxers
Authors: Lucy Horrobin, Rebecca Fores
Abstract:
Background: Muay Thai is an increasingly popular combat sport worldwide, and further academic research will contribute to its professional development. This research sought to produce normative data on the physical and physiological characteristics of professional Muay Thai boxers, as no such data currently exist, the ultimate aim being to inform appropriate training programs and to facilitate coaching. Methods: N = 9 professional, adult, male Muay Thai boxers were assessed for the following anthropometric, physical and physiological characteristics, using validated methods of assessment: body fat, hamstring flexibility, maximal dynamic upper-body strength, lower-limb peak power, upper-body muscular endurance, and aerobic capacity. Raw scores were analysed for mean, range and SD, and where applicable were expressed relative to body mass (BM). Results: The results showed characteristics similar to those found in other combat sports. Low percentages of body fat (mean ± SD: 8.54 ± 1.16%) allow for optimal power-to-weight ratios. A highly developed aerobic capacity (61.56 ± 5.13 ml·kg⁻¹·min⁻¹) facilitates recovery and power maintenance throughout bouts. Lower-limb peak power output values of 12.60 ± 2.09 W/kg indicate that Muay Thai boxers are amongst the most powerful combat-sport athletes. However, maximal dynamic upper-body strength scores of 1.14 ± 0.18 kg/kg were only in the 60th percentile of normative data for the general population, and muscular endurance (31.55 ± 11.95) and flexibility (19.55 ± 11.89 cm) scores showed wide standard deviations. These results might suggest that these characteristics are insignificant in Muay Thai or under-developed, perhaps due to deficient training programs. Implications: This research provides the first normative data on the physical and physiological characteristics of Muay Thai boxers.
The findings of this study will aid trainers and coaches in designing effective evidence-based training programs. Furthermore, it provides a foundation for further research on physiology in Muay Thai. Areas of further study could include determining the physiological demands of a full-rules bout and the effects of evidence-based training programs on performance.
Keywords: fitness testing, Muay Thai, physiology, strength and conditioning
Procedia PDF Downloads 230
3440 The Effect of Annual Weather and Sowing Date on Different Genotype of Maize (Zea mays L.) in Germination and Yield
Authors: Ákos Tótin
Abstract:
In crop production the most modern hybrids are available to us; therefore, yield and yield stability are determined by the agro-technology. The purpose of the experiment is to adapt modern agro-technology to the new types of hybrids. The long-term experiment was set up in 2015-2016 on chernozem soil in the Hajdúság region (eastern Hungary). The plots were set up at a plant density of 75 thousand ha⁻¹. We examined several widely used hybrids in Hungary. The studies conducted cover germination dynamics, growth dynamics, and the effect of the annual weather on yield. We used three different sowing dates (early, average and late) and measured how many plants germinated during the germination process. In the experiment, we observed the germination dynamics of 6 hybrids in 4 replications. In each replication, we counted the germinated plants in an area 2 m long and 2 rows wide. Data are shown as the average of the 6 hybrids and 4 replications. Growth dynamics were measured from a plant height of 10 cm (4-6 leaves); we measured the height of 10 plants at two-week intervals. The yield was measured by a special plot harvester, the Sampo Rosenlew 2010, which weighed the harvested plot and also took a sample from it. We determined the water content of the samples for the water-release dynamics. We then calculated the yield (t/ha) of each plot at 14% moisture content to compare them. The data were evaluated using Microsoft Excel 2015. The annual weather in each crop year defines the maize germination dynamics, because the amount of heat is decisive for the plants; in a cooler crop year the weather prolongs germination. In the 2015 crop year the weather was cold at the beginning, which prolonged germination of the first sowing, but the second and third sowings germinated faster. In the 2016 crop year the weather was much more favourable for the plants, so the first sowing germinated faster than in the previous year.
After that the weather cooled down; therefore, the second and third sowings germinated more slowly than in the previous year. Statistical analysis showed a significant difference between the growth dynamics of the early and late sowing dates. In 2015 the first sowing date gave the highest yield, the average sowing time the second highest, and the late sowing date the lowest.
Keywords: germination, maize, sowing date, yield
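The moisture correction described above (reporting each plot's yield at 14% moisture) works by conserving dry matter; a minimal sketch with hypothetical plot values:

```python
def yield_at_standard_moisture(fresh_yield_t_ha, moisture_pct, standard_pct=14.0):
    """Convert a fresh plot yield to its equivalent at a standard grain
    moisture content (14% by default), by conserving dry matter:
    corrected = fresh * (100 - measured MC) / (100 - standard MC)."""
    return fresh_yield_t_ha * (100.0 - moisture_pct) / (100.0 - standard_pct)

# Hypothetical plot: 12.0 t/ha harvested at 20% grain moisture.
print(round(yield_at_standard_moisture(12.0, 20.0), 2))  # 11.16
```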
Procedia PDF Downloads 231
3439 Revolutionary Solutions for Modeling and Visualization of Complex Software Systems
Abstract:
Existing software modeling and visualization approaches using UML are outdated. They are outcomes of reductionism and of the superposition principle, the assumption that the whole of a system is the sum of its parts, so with them all tasks of software modeling and visualization are performed linearly, partially, and locally. This paper introduces revolutionary solutions for modeling and visualizing complex software systems, which make such systems much easier to understand, test, and maintain. The solutions are based on complexity science, offering holistic, automatic, dynamic, virtual, and executable approaches about a thousand times more efficient than the traditional ones.
Keywords: complex systems, software maintenance, software modeling, software visualization
Procedia PDF Downloads 401
3438 Extraction of the Volatile Oils of Dictyopteris Membranacea by Focused Microwave Assisted Hydrodistillation and Supercritical Carbon Dioxide: Chemical Composition and Kinetic Data
Authors: Mohamed El Hattab
Abstract:
Supercritical carbon dioxide extraction (SFE) and focused microwave-assisted hydrodistillation (FMAHD) were employed to isolate the volatile fraction of the brown alga Dictyopteris membranacea from the crude extract. The volatile fractions obtained were analyzed by GC/MS. The major compounds, dictyopterene A, 6-butylcyclohepta-1,4-diene, undec-1-en-3-one, undeca-1,4-dien-3-one, (3-oxoundec-4-enyl) sulphur, tetradecanoic acid, hexadecanoic acid, 3-hexyl-4,5-dithia-cycloheptanone and albicanol (the last present only in the FMAHD oil), were identified by comparing their mass spectra with those in the commercial MS database and in our previous work. A kinetic study of both extraction processes, followed by external-standard quantification, allowed the evolution of the mass percentage of the major compounds in the two oils to be followed; an empirical mathematical model was used to describe their extraction kinetics.
Keywords: Dictyopteris membranacea, extraction techniques, mathematical modeling, volatile oils
Procedia PDF Downloads 428
3437 Health Reforms in Central and Eastern European Countries: Results, Dynamics, and Outcomes Measure
Authors: Piotr Romaniuk, Krzysztof Kaczmarek, Adam Szromek
Abstract:
Background: A number of approaches to assessing the performance of health systems have been proposed so far. Nonetheless, they lack a consensus regarding the key components of the assessment procedure and the criteria of evaluation. The WHO and OECD have developed methods of assessing health systems to counteract the underlying issues, but these are not free of controversy and have not produced a commonly accepted consensus. The aim of the study: On the basis of the WHO and OECD approaches, we developed our own methodology to assess the performance of health systems in Central and Eastern European countries. We applied the method to compare the effects of health system reforms in 20 countries of the region, in order to evaluate the dynamics of change in terms of health system outcomes. Methods: Data were collected for the 25-year period after the fall of communism, subsetted into different post-reform stages. Datasets collected from individual countries underwent one-, two- or multi-dimensional statistical analyses, and a Synthetic Measure of health system Outcomes (SMO) was calculated on the basis of the method of zeroed unitarization. A map of the dynamics of change over time across the region was constructed. Results: When making a comparative analysis of the tested group in terms of the average SMO value throughout the analyzed period, we noticed some differences, although the gaps between individual countries were small. The countries with the highest SMO were the Czech Republic, Estonia, Poland, Hungary and Slovenia, while the lowest values were found in Ukraine, Russia, Moldova, Georgia, Albania, and Armenia. Countries differ in the range of SMO change throughout the analyzed period. The dynamics of change are high in the case of Estonia and Latvia, moderate for Poland, Hungary, the Czech Republic, Croatia, Russia and Moldova, and small for Belarus, Ukraine, Macedonia, Lithuania, and Georgia.
This information reveals the fluctuation dynamics of the measured value in time, yet it does not necessarily mean that such a dynamic range reflects an improvement in a given country. In reality, some countries moved along the scale with differing effects: Albania decreased its level of health system outcomes, while Armenia and Georgia made progress but lost ground to the leaders in the region. On the other hand, Latvia and Estonia showed the most dynamic progress in improving outcomes. Conclusions: Countries that decided to implement comprehensive health reform achieved a positive result in terms of further improvements in health system efficiency. Moreover, a higher level of efficiency during the initial transition period generally had a positive effect on the subsequent value of the efficiency index, but not on the dynamics of change. The paths of health system outcome improvement are highly diverse across countries. The instrument we propose constitutes a useful tool for evaluating the effectiveness of reform processes in post-communist countries, but more studies are needed to identify the factors that may determine the results obtained by individual countries, as well as to eliminate the limitations of the methodology we applied.
Keywords: health system outcomes, health reforms, health system assessment, health system evaluation
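The zeroed unitarization underlying the SMO normalizes each indicator to [0, 1] across countries before aggregation; a minimal sketch with hypothetical indicators and equal weights (the study's actual indicator set and weighting are not specified here):

```python
def zeroed_unitarization(values, destimulant=False):
    """Normalize an indicator across countries to [0, 1].
    Stimulants (higher is better):   (x - min) / (max - min).
    Destimulants (lower is better):  (max - x) / (max - min)."""
    lo, hi = min(values), max(values)
    span = hi - lo
    if destimulant:
        return [(hi - x) / span for x in values]
    return [(x - lo) / span for x in values]

def synthetic_measure(indicator_rows):
    """Average the normalized indicators per country (equal weights assumed)."""
    n = len(indicator_rows[0])
    return [sum(row[i] for row in indicator_rows) / len(indicator_rows)
            for i in range(n)]

# Hypothetical indicators for three countries: life expectancy (stimulant)
# and infant mortality (destimulant).
life = zeroed_unitarization([70.0, 75.0, 80.0])        # [0.0, 0.5, 1.0]
mort = zeroed_unitarization([12.0, 8.0, 4.0], True)    # [0.0, 0.5, 1.0]
print(synthetic_measure([life, mort]))                 # [0.0, 0.5, 1.0]
```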
Procedia PDF Downloads 290
3436 Democracy as a Curve: A Study on How Democratization Impacts Economic Growth
Authors: Henrique Alpalhão
Abstract:
This paper models the widely studied relationship between a country's economic growth and its level of democracy, with an emphasis on possible non-linearities. We adopt the concept of 'political capital' as a measure of democracy, which is extremely uncommon in the literature and brings considerable advantages both in terms of dynamic considerations and plausibility. While the literature is not consensual on this matter, we obtain, via panel Arellano-Bond regression analysis on a database of more than 60 countries over 50 years, significant and robust results indicating that the impact of democratization on economic growth varies according to the stage of democratic development a country is in.
Keywords: democracy, economic growth, political capital, political economy
Procedia PDF Downloads 321
3435 Structural Molecular Dynamics Modelling of FH2 Domain of Formin DAAM
Authors: Rauan Sakenov, Peter Bukovics, Peter Gaszler, Veronika Tokacs-Kollar, Beata Bugyi
Abstract:
FH2 (formin homology 2) domains of several proteins, collectively known as formins, including DAAM, DAAM1 and mDia1, promote G-actin nucleation and elongation. FH2 domains of these formins exist as oligomers: chain dimerization through ring-structure formation serves as the structural basis for the actin polymerization function of the FH2 domain. A proper single-chain configuration, and specific interactions between its various regions, are necessary for individual chains to form a dimer functional in G-actin nucleation and elongation. FH1 and WH2 domain-containing formins have been shown to behave as intrinsically disordered proteins. Thus, the aim of this research was to study the structural dynamics of the FH2 domain of DAAM. To investigate its structural features, molecular dynamics simulation of chain A of the FH2 domain of DAAM, solvated in a water box in 50 mM NaCl, was conducted at temperatures from 293.15 to 353.15 K with VMD 1.9.2, NAMD 2.14 and AmberTools 21, using the 2z6e and 1v9d PDB structures of DAAM obtained via the I-TASSER web server. The calcium- and ATP-bound G-actin structure 3hbt (PDB) was used as a reference protein with well-described denaturation dynamics. Topology and parameter information from the CHARMM 2012 additive all-atom force fields for proteins, carbohydrate derivatives, water and ions was used in NAMD 2.14, and the ff19SB protein force field in AmberTools 21. The systems were energy-minimized for the first 1000 steps, then equilibrated and run for 1 ns of production in the NPT ensemble, using stochastic Langevin dynamics and the particle mesh Ewald method. Our root-mean-square deviation (RMSD) analysis of the dynamics of chain A revealed only insignificant changes in the total molecular average RMSD of the FH2 domain of DAAM across temperatures from 293.15 to 353.15 K.
In contrast, the total molecular average RMSD values of G-actin showed a considerable increase at 328 K, corresponding to the denaturation of the G-actin molecule at this temperature and its transition from the native, ordered state to the denatured, disordered state, which is well described in the literature. RMSD values of the lasso and tail regions of chain A of the FH2 domain of DAAM were higher than the total molecular average RMSD at temperatures from 293.15 to 353.15 K. These regions are functional in intra- and interchain interactions and contain the highly conserved tryptophan residues of the lasso region, the highly conserved GNYMN sequence of the post region, and the amino acids of the shell of the hydrophobic pocket of the salt bridge between Arg171 and Asp321, which are important for the structural stability and ordered state of the FH2 domain of DAAM and for its functions in FH2 domain dimerization. In conclusion, the higher-than-average RMSD values of the lasso and post regions of chain A may explain the disordered state of the FH2 domain of DAAM at temperatures from 293.15 to 353.15 K. Finally, the absence of a marked transition, in terms of significant changes in average molecular RMSD between the native and denatured states of the FH2 domain of DAAM over this temperature range, suggests that these formins may belong to the group of intrinsically disordered proteins rather than to the group of intrinsically ordered proteins such as G-actin.
Keywords: FH2 domain, DAAM, formins, molecular modelling, computational biophysics
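The RMSD analysis above can be sketched as follows, assuming each trajectory frame has already been superposed (aligned) onto the reference structure; the coordinates below are a toy trajectory, not the DAAM simulation data:

```python
import numpy as np

def rmsd(frame, reference):
    """Root-mean-square deviation between two (N, 3) coordinate arrays,
    assuming the frame is already superposed onto the reference."""
    diff = frame - reference
    return np.sqrt((diff * diff).sum() / len(reference))

# Toy trajectory: a reference structure plus a growing random displacement.
rng = np.random.default_rng(0)
ref = rng.standard_normal((100, 3))
traj = [ref + 0.01 * k * rng.standard_normal((100, 3)) for k in range(5)]
print([round(rmsd(f, ref), 3) for f in traj])  # grows from 0.0
```

Averaging such per-frame values over a production run at each temperature, for the whole chain or for a region such as the lasso, gives the quantities compared in the abstract.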
Procedia PDF Downloads 136
3434 Numerical Modeling of Flow in USBR II Stilling Basin with End Adverse Slope
Authors: Hamidreza Babaali, Alireza Mojtahedi, Nasim Soori, Saba Soori
Abstract:
The hydraulic jump is one of the effective means of energy dissipation in stilling basins; a large share of the flow energy is dissipated through the jump. An adverse slope at the end of the stilling basin increases energy dissipation and the stability of the hydraulic jump. In this study, an adverse slope was added to the end of the United States Bureau of Reclamation (USBR) Type II stilling basin in the 1:40-scale hydraulic model of Nazloochay dam, and the flow in the stilling basin was simulated using Flow-3D software. The numerical model was verified against experimental water-depth data in the stilling basin. The water-level profile, Froude number, pressure, air entrainment and turbulent dissipation were then investigated for a discharge of 300 m³/s using the k-ε and Re-Normalization Group (RNG) turbulence models. The results showed good agreement between the numerical and experimental models, so the numerical model can be used to optimize stilling basins.
Keywords: experimental and numerical modelling, end adverse slope, flow parameters, USBR II stilling basin
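For context on the jump itself, the conjugate depths and dissipated head of a hydraulic jump in a horizontal rectangular channel follow the classical Belanger relations; a minimal sketch with hypothetical inflow values (not the Nazloochay model data):

```python
import math

def sequent_depth_ratio(fr1):
    """Belanger equation: conjugate depth ratio y2/y1 for a hydraulic jump
    in a horizontal rectangular channel, from the upstream Froude number."""
    return 0.5 * (math.sqrt(1.0 + 8.0 * fr1 ** 2) - 1.0)

def jump_head_loss(y1, y2):
    """Energy head dissipated across the jump: (y2 - y1)^3 / (4 * y1 * y2)."""
    return (y2 - y1) ** 3 / (4.0 * y1 * y2)

y1, fr1 = 0.5, 6.0                      # hypothetical supercritical inflow (m, -)
y2 = y1 * sequent_depth_ratio(fr1)
print(round(y2, 3), round(jump_head_loss(y1, y2), 3))  # 4.0 5.359
```

The large head loss relative to the inflow depth is what makes the jump such an effective dissipator, and what an end adverse slope helps to stabilize.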
Procedia PDF Downloads 179
3433 Scalar Modulation Technique for Six-Phase Matrix Converter Fed Series-Connected Two-Motor Drives
Authors: A. Djahbar, M. Aillerie, E. Bounadja
Abstract:
In this paper we treat a new structure of high-power actuator, used in industry or in electric traction. The actuator consists of two induction motors: the first, a six-phase motor, is connected in series via the stators with a three-phase motor, and the whole is supplied by a single static converter. Our contribution in this paper is the optimization of the system's supply source: the multimotor group is fed by a direct frequency converter, without a DC-link capacitor. The modelling of the components of the multimotor system is presented first. Only the first component of the stator currents is used to produce the torque/flux of the first machine in the group; the second component of the stator currents provides additional degrees of freedom, which can be used for power conversion for the other connected motors. The decoupling of each motor from the group is obtained using the direct vector control scheme. Simulation results demonstrate the effectiveness of the proposed structure.
Keywords: induction machine, motor drives, scalar modulation technique, three-to-six phase matrix converter
Procedia PDF Downloads 548
3432 Serious Digital Video Game for Solving Algebraic Equations
Authors: Liliana O. Martínez, Juan E González, Manuel Ramírez-Aranda, Ana Cervantes-Herrera
Abstract:
A serious-game-category mobile application called Math Dominoes is presented. The main objective of this application is to strengthen the teaching-learning process of solving algebraic equations; it is based on the board game "Double 6" dominoes. Math Dominoes allows practice in solving first-, second- and third-degree algebraic equations. The application is aimed at students who seek to strengthen their equation-solving skills in a dynamic, interactive, and fun way, to reduce the risk of failure in subsequent courses that require mastery of this algebraic tool.
Keywords: algebra, equations, dominoes, serious games
Procedia PDF Downloads 130
3431 Some Conjectures and Programs about Computing the Detour Index of Molecular Graphs of Nanotubes
Authors: Shokofeh Ebrtahimi
Abstract:
Chemical graph theory is the topology branch of mathematical chemistry which applies graph theory to the mathematical modelling of chemical phenomena [1]; its pioneers include Alexandru Balaban, Ante Graovac, Ivan Gutman, Haruo Hosoya, Milan Randić and Nenad Trinajstić. Let G be the chemical graph of a molecule. The matrix D = [dij] is called the detour matrix of G if dij is the length of the longest path between atoms i and j. The sum of all entries above the main diagonal of D is called the detour index of G. In this paper, a new program for computing the detour index of molecular graphs of nanotubes by heptagons is presented, and some conjectures about the detour index of molecular graphs of nanotubes are included.
Keywords: chemical graph, detour matrix, detour index, carbon nanotube
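To make the definition concrete, the detour index can be computed by brute force on a small graph by searching for the longest simple path between every pair of atoms (longest path is NP-hard in general, so this sketch is feasible only for small molecules; the toy input is a 5-cycle, not a nanotube graph):

```python
import itertools

def detour_index(adj):
    """Sum of longest simple path lengths over all unordered vertex pairs,
    i.e. the sum of the entries above the diagonal of the detour matrix.
    adj: dict mapping vertex 0..n-1 to its list of neighbors."""
    n = len(adj)

    def longest(u, v):
        best = -1
        def dfs(node, visited, length):
            nonlocal best
            if node == v:
                best = max(best, length)
                return
            for w in adj[node]:
                if w not in visited:
                    visited.add(w)
                    dfs(w, visited, length + 1)
                    visited.remove(w)
        dfs(u, {u}, 0)
        return best

    return sum(longest(i, j) for i, j in itertools.combinations(range(n), 2))

# Toy molecule: the cycle C5 (each pair has a short and a long path around).
adj = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
print(detour_index(adj))  # 35
```

For C5, the five adjacent pairs have longest paths of length 4 and the five distance-2 pairs of length 3, giving 5·4 + 5·3 = 35.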
Procedia PDF Downloads 292
3430 A Parallel Cellular Automaton Model of Tumor Growth for Multicore and GPU Programming
Authors: Manuel I. Capel, Antonio Tomeu, Alberto Salguero
Abstract:
Tumor growth, from a transformed cancer cell up to a clinically apparent mass, spans a range of spatial and temporal magnitudes. Through computer simulations, cellular automata (CA) can accurately describe the complexity of tumor development. Tumor development prognosis could be made, without making patients undergo unpleasant medical examinations or painful invasive procedures, if we develop appropriate CA-based software tools. In silico testing mainly refers to computational biology research applied to clinical actions in medicine. Establishing sound computer-based models of cellular behaviour certainly reduces costs and saves precious time compared with carrying out experiments in vitro in labs or in vivo with living cells and organisms. These models aim to produce scientifically relevant results compared to traditional in vitro testing, which is slow, expensive, and does not generally have acceptable reproducibility under the same conditions. For speeding up computer simulations of cellular models, the recent literature shows proposals based on the CA approach that include advanced techniques, such as the clever use of efficient supporting data structures when modelling with deterministic or stochastic cellular automata. Multiparadigm and multiscale simulation of tumor dynamics is just beginning to be developed by the research community concerned. Stochastic cellular automata (SCA), whose parallel programming implementations are open to yielding high computational performance, are of much interest to be explored up to their computational limits. There have been some approaches based on optimizations to advance multiparadigm models of tumor growth, which mainly pursue improved performance through guarantees of efficient memory access, or by considering the dynamic evolution of the memory space (grids, trees, ...) that holds crucial data in simulations.
In our opinion, the different optimizations mentioned above are not decisive enough to achieve the high-performance computing power that cell-behavior simulation programs actually need. The possibility of using multicore and GPU parallelism as a promising multiplatform framework for developing new programming techniques to speed up simulation time has only begun to be explored in the last few years. This paper presents a model that incorporates parallel processing, identifying the synchronization necessary for speeding up tumor growth simulations implemented in Java and C++ programming environments. The speed-up achieved by specific parallel syntactic constructs, such as executors (thread pools) in Java, is studied. The new tumor growth parallel model is tested using implementations in Java and C++ on two different platforms: an Intel Core i-X chipset and an HPC cluster of processors at our university. The parallelization of the Polesczuk and Enderling model (commonly used by researchers in mathematical oncology) proposed here is analyzed with respect to performance gain. We intend to apply the model and overall parallelization technique presented here to solid tumors of specific affiliation such as prostate, breast, or colon. Our final objective is to set up a multiparadigm model capable of modelling angiogenesis, the growth inhibition induced by chemotaxis, and the effect of therapies based on the presence of cytotoxic/cytostatic drugs.
Keywords: cellular automaton, tumor growth model, simulation, multicore and manycore programming, parallel programming, high performance computing, speed up
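The thread-pool parallelization described above can be sketched as follows. This is a minimal illustrative Python sketch, not the authors' Java/C++ implementation: the division probability, the row-band partition of the grid, and the synchronous two-grid update are all assumptions for illustration.

```python
# Minimal sketch of a synchronous stochastic CA tumor-growth step
# parallelized with a thread pool (mirroring the Java executor idea).
# Grid values: 0 = empty, 1 = tumor cell. Parameters are illustrative.
import numpy as np
from concurrent.futures import ThreadPoolExecutor

P_DIVIDE = 0.3  # assumed per-step division probability

def step_band(grid, nxt, r0, r1, rng):
    """Update rows [r0, r1) of `nxt` from the frozen `grid` state."""
    n, m = grid.shape
    for i in range(r0, r1):
        for j in range(m):
            if grid[i, j] == 1:
                nxt[i, j] = 1  # existing cell persists
                if rng.random() < P_DIVIDE:
                    # try to place a daughter cell in a random neighbor
                    di, dj = rng.choice([-1, 0, 1]), rng.choice([-1, 0, 1])
                    ni, nj = (i + di) % n, (j + dj) % m
                    if grid[ni, nj] == 0:
                        # cross-band writes all store the same value (1),
                        # so the race between bands is benign here
                        nxt[ni, nj] = 1

def parallel_step(grid, workers=4, seed=0):
    nxt = np.zeros_like(grid)
    bands = np.array_split(range(grid.shape[0]), workers)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [
            pool.submit(step_band, grid, nxt, b[0], b[-1] + 1,
                        np.random.default_rng(seed + k))
            for k, b in enumerate(bands) if len(b)
        ]
        for f in futures:
            f.result()  # propagate any worker exception
    return nxt

grid = np.zeros((64, 64), dtype=np.int8)
grid[32, 32] = 1  # a single seeded tumor cell
for _ in range(20):
    grid = parallel_step(grid)
print("tumor cells after 20 steps:", int(grid.sum()))
```

In a real implementation (Java executors or C++ threads), the same structure applies: the grid is split into disjoint bands, each submitted as a task, with a barrier between steps so every band reads a consistent previous state.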
Procedia PDF Downloads 244
3429 Time-Dependent Behaviour of Reinforced Concrete Beams under Sustained and Repeated Loading
Authors: Sultan Daud, John P. Forth, Nikolaos Nikitas
Abstract:
The current study aims to highlight the impact of loading characteristics on the time evolution (focusing particularly on long-term effects) of the deformation of real reinforced concrete beams. Namely, the tension-stiffening code provisions (i.e. within Eurocode 2) are reviewed with a clear intention to reassess their operational value and predictive capacity. In what follows, the experimental programme adopted, along with some preliminary findings and numerical modelling attempts, is presented. For a range of long, slender, simply supported reinforced concrete beams (4200 mm), constant static sustained and repeated cyclic loadings were applied, mapping the time evolution of deformation. All experiments were carried out at the Heavy Structures Lab of the University of Leeds. During the tests, the mid-span deflection, creep coefficient and shrinkage strains were monitored for a duration of 90 days. The obtained results are set against the values predicted by Eurocode 2 and by the tools within a commercial FE package (i.e. Midas FEA), showing that existing knowledge and practice are at times over-conservative.
Keywords: Eurocode 2, Midas FEA, repeated loading, sustained loading
Procedia PDF Downloads 347
3428 A Comparative Study on Sampling Techniques of Polynomial Regression Model Based Stochastic Free Vibration of Composite Plates
Authors: S. Dey, T. Mukhopadhyay, S. Adhikari
Abstract:
This paper presents an exhaustive comparative investigation of sampling techniques for polynomial regression model based stochastic natural frequency analysis of composite plates. Both individual and combined variations of input parameters are considered to map the computational time and accuracy of each modelling technique. The finite element formulation for composites is capable of dealing with both correlated and uncorrelated random input variables, such as fibre parameters and material properties. The results obtained by polynomial regression (PR) using different sampling techniques are compared, and the suitability of techniques such as 2k factorial design, central composite design, A-optimal, I-optimal and D-optimal designs, Taguchi's orthogonal array design, Box-Behnken design, Latin hypercube sampling and the Sobol sequence is illustrated. Statistical analysis of the first three natural frequencies is presented to compare the results and the performance of each technique.
Keywords: composite plate, natural frequency, polynomial regression model, sampling technique, uncertainty quantification
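As a toy illustration of the comparison described above, the sketch below (Python; the response function is an assumed analytic stand-in, not the paper's finite element natural-frequency solve) builds a quadratic polynomial-regression surrogate from Latin hypercube samples and from plain Monte Carlo samples, then compares their accuracy on a held-out test set.

```python
# Sketch: polynomial-regression surrogate accuracy under two sampling
# techniques (Latin hypercube vs. Monte Carlo). Illustrative only.
import numpy as np

rng = np.random.default_rng(1)

def latin_hypercube(n, d):
    """n stratified samples in [0, 1]^d: one point per stratum per dim."""
    u = (rng.random((n, d)) + np.arange(n)[:, None]) / n
    for k in range(d):
        rng.shuffle(u[:, k])  # decouple the strata across dimensions
    return u

def response(x):
    # assumed stand-in for a natural-frequency FE solve
    return 2.0 * np.sin(3.0 * x[:, 0]) - 1.5 * x[:, 1] + 0.8 * x[:, 0] * x[:, 1]

def quad_features(x):
    x1, x2 = x[:, 0], x[:, 1]
    return np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])

def fit_and_score(train_x, test_x, test_y):
    """Fit a quadratic PR model on train_x and return test RMSE."""
    beta, *_ = np.linalg.lstsq(quad_features(train_x), response(train_x),
                               rcond=None)
    pred = quad_features(test_x) @ beta
    return float(np.sqrt(np.mean((pred - test_y) ** 2)))

test_x = rng.random((500, 2))
test_y = response(test_x)
rmse_lhs = fit_and_score(latin_hypercube(30, 2), test_x, test_y)
rmse_mc = fit_and_score(rng.random((30, 2)), test_x, test_y)
print(f"RMSE  LHS: {rmse_lhs:.3f}   MC: {rmse_mc:.3f}")
```

The same skeleton extends to the other designs listed in the abstract (factorial, central composite, optimal designs, Sobol) by swapping out the sampling function while keeping the surrogate and scoring fixed.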
Procedia PDF Downloads 513
3427 Walking in a Weather rather than a Climate: Critique on the Meta-Narrative of Buddhism in Early India
Authors: Yongjun Kim
Abstract:
Since scholarly agreement on the historicity of the historical Buddha in eastern India, the beginning, heyday and decline of Buddhism in Early India have been discussed in the context of urbanization, commercialism and state formation; in short, within a Weberian socio-political frame. Recent scholarship, notably in archaeology and anthropology, has proposed a 're-materialization' of Buddhism in Early India, based on what Buddhists actually did rather than what they should have done according to canonical teachings or philosophies. Yet its historical narrations still remain within a socio-political meta-narrative that tends to unjustifiably dismiss the naturally existing heterogeneity and often chaotic dynamics of diverse agencies, landscape perceptions, localized traditions, etc. The author will argue for a multiplicity of theoretical standpoints in reconstructing Buddhism in Early India. To this end, the diverse agencies, localized traditions and landscape patterns of Buddhist communities and monasteries in Trans-Himalayan regions, focusing on the Zanskar Valley and Spiti Valley in India, will first be illustrated based on the author's fieldwork. The author will then discuss how this anthropological landscape analysis can be integrated with textual and archaeological evidence on the tension between urban monastic and forest Buddhism; the phenomena of sacred landscapes, cemeteries, gardens and natural caves alongside socio-economic landscapes; and the demographic heterogeneity of Early India. Finally, a comparison will be attempted between the anthropological landscape of the present Trans-Himalaya and the archaeological landscape of ancient Western India. The study of Buddhism in Early India has hardly been approached through multivalent theoretical archaeology and the anthropology of religion; thus traditional and recent scholarship alike have produced historical meta-narratives, however heterogeneous among themselves.
The multidisciplinary approaches of textual criticism, archaeology and anthropology will surely help to deconstruct the grand, all-encompassing historical description of Buddhism in Early India and then to reconstruct localized, behavioral and multivalent narratives. This paper expects to highlight the importance of lesser-studied Buddhist archaeological sites and dynamic views on religious landscape in Early India with the help of a critical anthropology of religion.
Keywords: analogy by living traditions, Buddhism in Early India, landscape analysis, meta-narrative
Procedia PDF Downloads 333
3426 Sustainability Performance in the Post-Pandemic Era: Employee Resilience Impact on Improving Employee and Organizational Performance
Authors: Sonali Mohite
Abstract:
The COVID-19 pandemic has brought about severe changes to Organizational Sustainability (OS). This situation forces organizations to develop the competencies required to augment Employee Resilience (ER) and achieve profitable growth. This study explores how employee resilience contributes to both individual and organizational success in the wake of the COVID-19 pandemic. We suggest that employees who possess strong coping mechanisms and adaptability are better equipped to handle ongoing disruptions, resulting in improved individual performance metrics such as productivity, engagement and innovative thinking. Hence, the aim of this research is to explore the efficiency of ER in improving Employee Performance (EP) and OS in the post-pandemic (PP) era. Using convenience sampling, responses from a total of 422 employees were collected across numerous organizations. The study's hypotheses were then tested using Structural Equation Modelling (SEM). The findings show that the ER factors of Job Satisfaction (JS), Self-Efficacy (SE), Supervisors' Support (SS) and Facilitating Conditions (FC) have positive and significant associations with organizational efficiency. Furthermore, the findings also show that the strongest relationship is between SE and employee and organizational performance (EOP).
Keywords: employee resilience, employee performance, organizational performance, sustainability, post-pandemic
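Full SEM requires dedicated tooling, but the kind of path estimate the study reports can be sketched with a simple stand-in: standardized OLS path coefficients from the four resilience factors to a performance outcome. Everything here is assumed for illustration (synthetic factor scores, assumed path weights with SE strongest), not the study's data or results.

```python
# Stand-in sketch for SEM path estimation: standardized OLS coefficients
# from resilience factors (JS, SE, SS, FC) to a performance outcome,
# on synthetic data with assumed "true" paths.
import numpy as np

rng = np.random.default_rng(7)
n = 422  # sample size matching the study
X = rng.standard_normal((n, 4))          # synthetic factor scores
true_paths = np.array([0.2, 0.4, 0.15, 0.1])  # assumed; SE strongest
y = X @ true_paths + 0.5 * rng.standard_normal(n)

# standardize both sides so coefficients are comparable path weights
Xs = (X - X.mean(0)) / X.std(0)
ys = (y - y.mean()) / y.std()
beta, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
for name, b in zip(["JS", "SE", "SS", "FC"], beta):
    print(f"{name}: {b:+.2f}")
```

With the assumed weights, the recovered SE coefficient dominates, mirroring the kind of "strongest relationship" finding the abstract reports; a real SEM additionally models the latent constructs and measurement error.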
Procedia PDF Downloads 22
3425 Protocol for Dynamic Load Distributed Low Latency Web-Based Augmented Reality and Virtual Reality
Authors: Rohit T. P., Sahil Athrij, Sasi Gopalan
Abstract:
Currently, the content entertainment industry is dominated by mobile devices. As trends slowly shift towards Augmented/Virtual Reality applications, the computational demands on these devices are increasing exponentially, and we are already reaching the limits of hardware optimizations. This paper proposes a software solution to this problem. By leveraging the capabilities of cloud computing, we can offload work from mobile devices to dedicated rendering servers that are far more powerful. But this introduces the problem of latency. This paper introduces a protocol that can achieve a high-performance, low-latency Augmented/Virtual Reality experience. There are two parts to the protocol.
1) In-flight compression. The main cause of latency in the system is the time required to transmit the camera frame from client to server. The round-trip time is directly proportional to the amount of data transmitted, and can therefore be reduced by compressing the frames before sending. Standard compression algorithms such as JPEG yield only minor size reductions. Since the images to be compressed are consecutive camera frames, there will not be many changes between two consecutive images, so inter-frame compression is preferred. Inter-frame compression can be implemented efficiently using WebGL, but WebGL implementations limit floating-point precision to 16 bits on most devices. This can introduce noise to the image due to rounding errors, which add up over time. This can be solved with an improved inter-frame compression algorithm that detects changes between frames and reuses unchanged pixels from the previous frame, eliminating the need for floating-point subtraction and thereby cutting down on noise. Change detection is also improved drastically by taking the weighted average difference of pixels instead of the absolute difference.
The kernel weights for this comparison can be fine-tuned to match the type of image to be compressed.
2) Dynamic load distribution. Conventional cloud computing architectures work by offloading as much work as possible to the servers, but this approach can strain bandwidth and server costs. The optimal solution is obtained when the device utilizes 100% of its resources and the rest is done by the server. The protocol balances the load between the server and the client by doing a fraction of the computation on the device, depending on the power of the device and network conditions, and it is responsible for dynamically partitioning the tasks. Special flags are used to communicate the workload fraction between the client and the server and are updated at a constant interval of time (or frames). The whole protocol is designed to be client-agnostic. Flags are available to the client for resetting the frame, indicating latency, switching mode, etc. The server can react to client-side changes on the fly and adapt accordingly by switching to different pipelines. The server is designed to spread the load effectively and thereby scale horizontally, which is achieved by isolating client connections into different processes.
Keywords: 2D kernelling, augmented reality, cloud computing, dynamic load distribution, immersive experience, mobile computing, motion tracking, protocols, real-time systems, web-based augmented reality application
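The kernel-weighted change-detection step described above can be sketched as follows (Python/NumPy rather than WebGL; the 3×3 Gaussian-like kernel and the threshold are assumed values, not the paper's tuned parameters). Only pixels whose weighted neighborhood difference exceeds the threshold are transmitted; the receiver reuses the rest from the previous frame.

```python
# Sketch of inter-frame compression via kernel-weighted change detection.
# Kernel and threshold are illustrative assumptions.
import numpy as np

KERNEL = np.array([[1, 2, 1],
                   [2, 4, 2],
                   [1, 2, 1]], dtype=float)
KERNEL /= KERNEL.sum()
THRESHOLD = 8.0  # assumed tuning parameter

def weighted_diff(prev, curr):
    """3x3 kernel-weighted average of |curr - prev| at each pixel."""
    d = np.abs(curr.astype(float) - prev.astype(float))
    padded = np.pad(d, 1, mode="edge")
    out = np.zeros_like(d)
    for di in range(3):
        for dj in range(3):
            out += KERNEL[di, dj] * padded[di:di + d.shape[0],
                                           dj:dj + d.shape[1]]
    return out

def encode(prev, curr):
    mask = weighted_diff(prev, curr) > THRESHOLD
    return mask, curr[mask]  # mask + changed pixel values only

def decode(prev, mask, values):
    out = prev.copy()
    out[mask] = values  # unchanged pixels reused from the previous frame
    return out

rng = np.random.default_rng(0)
prev = rng.integers(0, 256, (120, 160), dtype=np.uint8)
curr = prev.copy()
curr[40:60, 50:90] += 50  # a moving object alters one region

mask, values = encode(prev, curr)
recon = decode(prev, mask, values)
print(f"transmitted {values.size / curr.size:.1%} of pixels")
```

Because the comparison is integer-valued throughout, no floating-point subtraction is needed on the client path, which is the noise-avoidance property the abstract describes.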
Procedia PDF Downloads 74
3424 Laser Additive Manufacturing of Carbon Nanotube-Reinforced Polyamide 12 Composites
Authors: Kun Zhou
Abstract:
Additive manufacturing has emerged as a disruptive technology that is capable of manufacturing products with complex geometries through the accumulation of material feedstock in a layer-by-layer fashion. Laser additive manufacturing such as selective laser sintering has excellent printing resolution, high printing speed and robust part strength, and has seen widespread adoption in the aerospace, automotive and biomedical industries. This talk highlights and discusses our recent work in the development of carbon nanotube-reinforced polyamide 12 (CNT/PA12) composites printed using laser additive manufacturing. Numerical modelling studies have been conducted to simulate the various processes within laser additive manufacturing of CNT/PA12 composites, and extensive experimental work has been carried out to investigate the mechanical and functional properties of the printed parts. The results from these studies grant a deeper understanding of the intricate mechanisms occurring within each process and enable an accurate optimization of process parameters for CNT/PA12 and other polymer composites.
Keywords: CNT/PA12 composites, laser additive manufacturing, process parameter optimization, numerical modeling
Procedia PDF Downloads 153
3423 Conceptual Perimeter Model for Estimating Building Envelope Quantities
Authors: Ka C. Lam, Oluwafunmibi S. Idowu
Abstract:
Building girth is important in building economics and is mostly used in the quantity take-off of various cost items. The literature suggests that the use of conceptual quantities can improve the accuracy of cost models, and the girth or perimeter of a building can be used to estimate such conceptual quantities. Hence, the current paper aims to model the perimeter-area function of building shapes for use at the conceptual design stage. A detailed literature review of existing building shape indexes was carried out. An empirical approach was used to study the relationship between the area and the shortest side length of a four-sided orthogonal polygon, and a mathematical approach was then used to establish the observed relationships. The empirical results obtained were in agreement with the mathematical model developed. A new equation, termed the "conceptual perimeter equation", is proposed. The equation can be used to estimate building envelope quantities such as external wall area, external finishing area and scaffolding area before sketch or detailed drawings are prepared.
Keywords: building envelope, building shape index, conceptual quantities, cost modelling, girth
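The abstract does not reproduce the proposed "conceptual perimeter equation" itself, so the sketch below uses the elementary rectangle relation P = 2(s + A/s), where A is the floor area and s the shortest side, as a stand-in to show how a perimeter estimate feeds conceptual envelope quantities. All inputs and the scaffolding allowance are illustrative assumptions.

```python
# Stand-in sketch: perimeter from area and shortest side (rectangle
# relation, not the paper's equation), then conceptual envelope
# quantities derived from the girth. Inputs are illustrative.
def rect_perimeter(area_m2, shortest_side_m):
    """Perimeter of a rectangle given its area and shortest side."""
    return 2.0 * (shortest_side_m + area_m2 / shortest_side_m)

def envelope_quantities(area_m2, shortest_side_m, height_m):
    p = rect_perimeter(area_m2, shortest_side_m)
    return {
        "girth_m": p,
        "external_wall_area_m2": p * height_m,       # wall = girth x height
        "scaffolding_area_m2": p * (height_m + 1.0), # assumed 1 m allowance
    }

q = envelope_quantities(area_m2=600.0, shortest_side_m=20.0, height_m=9.0)
print(q)  # girth: 2 * (20 + 600/20) = 100 m
```

A shape-index correction, which is what the paper's model would supply for non-rectangular orthogonal plans, would replace `rect_perimeter` while the downstream quantity take-off stays the same.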
Procedia PDF Downloads 343
3422 Topic Modelling Using Latent Dirichlet Allocation and Latent Semantic Indexing on SA Telco Twitter Data
Authors: Phumelele Kubheka, Pius Owolawi, Gbolahan Aiyetoro
Abstract:
Twitter is one of the most popular social media platforms, where users can share their opinions on different subjects. As of 2010, the Twitter platform generated more than 12 terabytes of data daily, roughly 4.3 petabytes in a single year. For this reason, Twitter is a great source for big data mining. Many industries, such as telecommunication companies, can leverage the availability of Twitter data to better understand their markets and make appropriate business decisions. This study performs topic modeling on Twitter data using Latent Dirichlet Allocation (LDA). The obtained results are benchmarked against another topic modeling technique, Latent Semantic Indexing (LSI). The study aims to retrieve topics from a Twitter dataset containing user tweets on South African Telcos. Results from this study show that LSI is much faster than LDA; however, LDA yields better results, with topic coherence higher by 8% for the best-performing model represented in Table 1. A higher topic coherence score indicates better performance of the model.
Keywords: big data, latent Dirichlet allocation, latent semantic indexing, telco, topic modeling, twitter
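To make the LDA side of the comparison concrete, here is a minimal collapsed-Gibbs LDA sketch on toy "tweets" (NumPy only; the toy corpus, two-topic setting and hyperparameters are assumptions for illustration, not the study's dataset or tooling).

```python
# Minimal collapsed-Gibbs LDA on a toy telco-flavored corpus.
import numpy as np

docs = [
    "network down signal drop outage".split(),
    "signal outage network slow drop".split(),
    "data bundle price promo cheap".split(),
    "cheap promo data price bundle".split(),
]
vocab = sorted({w for d in docs for w in d})
w2i = {w: i for i, w in enumerate(vocab)}
K, V, alpha, beta = 2, len(vocab), 0.1, 0.01
rng = np.random.default_rng(0)

# random initial topic assignment per token, plus count tables
z = [[rng.integers(K) for _ in d] for d in docs]
ndk = np.zeros((len(docs), K)); nkw = np.zeros((K, V)); nk = np.zeros(K)
for d, doc in enumerate(docs):
    for t, w in enumerate(doc):
        k = z[d][t]; ndk[d, k] += 1; nkw[k, w2i[w]] += 1; nk[k] += 1

for _ in range(200):  # Gibbs sweeps
    for d, doc in enumerate(docs):
        for t, w in enumerate(doc):
            k, wi = z[d][t], w2i[w]
            # remove the token, sample its topic from the conditional,
            # then add it back under the new topic
            ndk[d, k] -= 1; nkw[k, wi] -= 1; nk[k] -= 1
            p = (ndk[d] + alpha) * (nkw[:, wi] + beta) / (nk + V * beta)
            k = rng.choice(K, p=p / p.sum())
            z[d][t] = k; ndk[d, k] += 1; nkw[k, wi] += 1; nk[k] += 1

for k in range(K):
    top = [vocab[i] for i in np.argsort(-nkw[k])[:3]]
    print(f"topic {k}: {top}")
```

On a real corpus one would use an optimized library implementation and score the learned topics with a coherence measure, which is the metric the study uses to compare LDA against LSI.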
Procedia PDF Downloads 150
3421 Enhance the Power of Sentiment Analysis
Authors: Yu Zhang, Pedro Desouza
Abstract:
Since big data has become substantially more accessible and manageable due to the development of powerful tools for dealing with unstructured data, people are eager to mine information from social media resources that could not be handled in the past. Sentiment analysis, as a novel branch of text mining, has in the last decade become increasingly important in marketing analysis, customer risk prediction and other fields. Scientists and researchers have undertaken significant work in creating and improving their sentiment models. In this paper, we present a concept for selecting appropriate classifiers based on the features and qualities of data sources, by comparing the performance of five classifiers on three popular social media data sources: Twitter, Amazon Customer Reviews, and Movie Reviews. We introduce a couple of innovative models that outperform traditional sentiment classifiers for these data sources, and provide insights on how to further improve the predictive power of sentiment analysis. The modelling and testing work was done in R and with Greenplum in-database analytic tools.
Keywords: sentiment analysis, social media, Twitter, Amazon, data mining, machine learning, text mining
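One of the traditional baselines such a comparison starts from is a multinomial Naive Bayes sentiment classifier; a minimal sketch follows (Python on a toy review set, purely illustrative, and not the authors' R/Greenplum models).

```python
# Minimal multinomial Naive Bayes sentiment classifier on toy reviews.
import math
from collections import Counter

train = [
    ("great product love it", 1), ("excellent fast shipping", 1),
    ("terrible waste of money", 0), ("awful broke in a day", 0),
    ("love the excellent quality", 1), ("terrible awful service", 0),
]

counts = {0: Counter(), 1: Counter()}
prior = Counter()
for text, y in train:
    prior[y] += 1
    counts[y].update(text.split())
vocab = set(counts[0]) | set(counts[1])

def predict(text):
    """Return the class (1 = positive, 0 = negative) with higher
    log-probability under a Laplace-smoothed multinomial model."""
    scores = {}
    for y in (0, 1):
        total = sum(counts[y].values())
        score = math.log(prior[y] / len(train))
        for w in text.split():
            score += math.log((counts[y][w] + 1) / (total + len(vocab)))
        scores[y] = score
    return max(scores, key=scores.get)

print(predict("love this excellent phone"))   # -> 1 (positive)
print(predict("awful terrible experience"))   # -> 0 (negative)
```

The paper's point is that which classifier wins depends on the data source; this baseline is the kind of model against which source-specific improvements are measured.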
Procedia PDF Downloads 353