Search results for: lithological modeling
1075 An Investigation of the Association between Pathological Personality Dimensions and Emotion Dysregulation among Virtual Network Users: The Mediating Role of Cyberchondria Behaviors
Authors: Mehdi Destani, Asghar Heydari
Abstract:
Objective: The present study aimed to investigate the association between pathological personality dimensions and emotion dysregulation through the mediating role of cyberchondria behaviors among users of virtual networks. Materials and methods: A descriptive-correlational research method was used, and the statistical population consisted of all people active on social network sites in 2020. The sample comprised 300 people selected through convenience sampling. Data were collected via online questionnaires, including the Difficulties in Emotion Regulation Scale (DERS), the Personality Inventory for DSM-5 Brief Form (PID-5-BF), and the Cyberchondria Severity Scale Brief Form (CSS-12). Data analysis was conducted using Pearson's correlation coefficient and structural equation modeling (SEM). Findings: Pathological personality dimensions and cyberchondria behaviors had a positive and significant association with emotion dysregulation (p<0.001), and the presented model fitted the data well. Pathological personality dimensions accounted for emotion dysregulation among virtual network users with an overall effect (p<0.001, β=0.658), a direct effect (p<0.001, β=0.528), and an indirect effect mediated by cyberchondria behaviors (p<0.001, β=0.130). Conclusion: The findings show the need to attend to pathological personality dimensions as a determining variable and cyberchondria behaviors as a mediator in the vulnerability of social network users to emotion dysregulation.
Keywords: cyberchondria, emotion dysregulation, pathological personality dimensions, social networks
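For readers who want to trace the mediation arithmetic, a minimal product-of-coefficients sketch in Python follows. The variable names (pid5, css, ders), the synthetic data, and the path strengths are assumptions, and the latent measurement model and bootstrap inference of full SEM are omitted.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 300                                   # matches the study's sample size
pid5 = rng.normal(size=n)                 # PID-5-BF total, z-scored (assumed layout)
css = 0.4 * pid5 + rng.normal(size=n)     # CSS-12 cyberchondria score
ders = 0.5 * pid5 + 0.3 * css + rng.normal(size=n)  # DERS score
df = pd.DataFrame({"pid5": pid5, "css": css, "ders": ders})

a = smf.ols("css ~ pid5", df).fit().params["pid5"]   # path a: personality -> cyberchondria
fit = smf.ols("ders ~ pid5 + css", df).fit()
b, direct = fit.params["css"], fit.params["pid5"]    # path b and direct effect c'
print("indirect:", a * b, "total:", direct + a * b)  # compare with beta = 0.130 and 0.658
```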
Procedia PDF Downloads 102
1074 Determining of the Performance of Data Mining Algorithm Determining the Influential Factors and Prediction of Ischemic Stroke: A Comparative Study in the Southeast of Iran
Authors: Y. Mehdipour, S. Ebrahimi, A. Jahanpour, F. Seyedzaei, B. Sabayan, A. Karimi, H. Amirifard
Abstract:
Ischemic stroke is a common cause of disability and mortality: the fourth leading cause of death worldwide, and the third according to some sources. Only one-third of patients with ischemic stroke fully recover; one-third are left with permanent disability, and one-third die. Predictive models therefore have a vital role in reducing the complications and costs related to this disease. The aim of this study was to identify the influential factors and predict ischemic stroke with the help of data mining (DM) methods. This was a descriptive-analytic study of 213 patients referred to Ali ibn Abi Talib (AS) Hospital in Zahedan. The data collection tool was a checklist whose validity and reliability had been confirmed. Decision tree algorithms were used for modeling, and data analysis was performed using SPSS 19 and SPSS Modeler 14.2. The comparison of algorithms showed that the CHAID algorithm, with 95.7% accuracy, performed best. Moreover, based on the resulting model, anemia, diabetes mellitus, hyperlipidemia, transient ischemic attacks, coronary artery disease, and atherosclerosis are the most influential factors in stroke. Decision tree algorithms, especially CHAID, have acceptable precision and predictive ability for determining the factors affecting ischemic stroke. Predictive models built with this algorithm can therefore play a significant role in decreasing the mortality and disability caused by ischemic stroke.
Keywords: data mining, ischemic stroke, decision tree, Bayesian network
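A hedged sketch of the classification step is shown below. scikit-learn has no CHAID implementation (CHAID grows multiway trees with chi-squared splits), so a CART tree stands in, and the cohort, feature encoding, and labels are synthetic assumptions.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 213                                   # matches the study's cohort size
# binary risk factors named after those reported as most influential (assumed encoding):
# anemia, diabetes, hyperlipidemia, TIA, CAD, atherosclerosis
X = rng.integers(0, 2, size=(n, 6))
y = (X.sum(axis=1) + rng.normal(0, 1, n) > 3).astype(int)   # synthetic stroke label

clf = DecisionTreeClassifier(criterion="entropy", max_depth=4,
                             min_samples_leaf=10, random_state=0)
print(cross_val_score(clf, X, y, cv=10).mean())   # compare with the reported 95.7%
```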
Procedia PDF Downloads 172
1073 Non-Linear Assessment of Chromatographic Lipophilicity and Model Ranking of Newly Synthesized Steroid Derivatives
Authors: Milica Karadzic, Lidija Jevric, Sanja Podunavac-Kuzmanovic, Strahinja Kovacevic, Anamarija Mandic, Katarina Penov Gasi, Marija Sakac, Aleksandar Okljesa, Andrea Nikolic
Abstract:
The present paper deals with the prediction of the chromatographic lipophilicity of newly synthesized steroid derivatives. The prediction used in silico generated molecular descriptors and quantitative structure-retention relationship (QSRR) methodology with an artificial neural network (ANN) approach. The chromatographic lipophilicity of the investigated compounds was expressed as the retention factor, log k. For QSRR modeling, a feedforward back-propagation ANN with a gradient descent learning algorithm was applied. The generated ANN models were ranked using the novel sum of ranking differences (SRD) method, the aim being to identify the most consistent QSRR model and to reveal similarities or dissimilarities between the models. In this study, SRD was performed with average log k values as reference values. An excellent correlation between experimentally observed and ANN-predicted log k values was obtained, with a correlation coefficient higher than 0.9890. The statistical results show that the established ANN models can be applied for the required purpose. This article is based upon work from COST Action TD1305, supported by COST (European Cooperation in Science and Technology).
Keywords: artificial neural networks, liquid chromatography, molecular descriptors, steroids, sum of ranking differences
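The QSRR workflow can be sketched as follows; the descriptors and log k values are synthetic stand-ins, and a single hidden layer trained with stochastic gradient descent only approximates the feedforward back-propagation network described above.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 5))                       # in silico molecular descriptors (toy)
y = X @ [0.8, -0.5, 0.3, 0.1, 0.0] + rng.normal(0, 0.05, 30)   # stand-in log k values

qsrr = make_pipeline(StandardScaler(),
                     MLPRegressor(hidden_layer_sizes=(4,), solver="sgd",
                                  max_iter=5000, random_state=0))
qsrr.fit(X, y)
print(np.corrcoef(y, qsrr.predict(X))[0, 1])       # compare with the reported r > 0.9890
```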
Procedia PDF Downloads 319
1072 Assessing Effects of an Intervention on Bottle-Weaning and Reducing Daily Milk Intake from Bottles in Toddlers Using Two-Part Random Effects Models
Authors: Yungtai Lo
Abstract:
Two-part random effects models have been used to fit semi-continuous longitudinal data where the response variable has a point mass at 0 and a continuous right-skewed distribution for positive values. We review methods proposed in the literature for analyzing data with excess zeros. A two-part logit-log-normal random effects model, a two-part logit-truncated normal random effects model, a two-part logit-gamma random effects model, and a two-part logit-skew normal random effects model were used to examine effects of a bottle-weaning intervention on reducing bottle use and daily milk intake from bottles in toddlers aged 11 to 13 months in a randomized controlled trial. We show in all four two-part models that the intervention promoted bottle-weaning and reduced daily milk intake from bottles in toddlers drinking from a bottle. We also show that there are no differences in model fit using either the logit link function or the probit link function for modeling the probability of bottle-weaning in all four models. Furthermore, prediction accuracy of the logit or probit link function is not sensitive to the distribution assumption on daily milk intake from bottles in toddlers not off bottles.
Keywords: two-part model, semi-continuous variable, truncated normal, gamma regression, skew normal, Pearson residual, receiver operating characteristic curve
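A minimal two-part sketch (a logit for the zero part and a log-normal regression for the positive part) is given below. The data are synthetic, and the random effects the paper uses for repeated measures are omitted.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
treat = np.repeat([0, 1], n // 2)                        # intervention arm indicator
weaned = rng.random(n) < np.where(treat == 1, 0.5, 0.3)  # intervention raises weaning odds
milk = np.where(weaned, 0.0, rng.lognormal(2.4 - 0.2 * treat, 0.6))  # daily intake

X = sm.add_constant(treat)
part1 = sm.Logit((milk > 0).astype(int), X).fit(disp=0)  # part 1: P(still on bottle)
pos = milk > 0
part2 = sm.OLS(np.log(milk[pos]), sm.add_constant(treat[pos])).fit()  # log-normal part
print(part1.params, part2.params)
```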
Procedia PDF Downloads 348
1071 An Approach for Association Rules Ranking
Authors: Rihab Idoudi, Karim Saheb Ettabaa, Basel Solaiman, Kamel Hamrouni
Abstract:
Medical association rule (AR) induction is used to discover useful correlations between pertinent concepts in large medical databases. Nevertheless, AR algorithms produce a huge number of rules and do not guarantee the usefulness and interestingness of the generated knowledge. To overcome this drawback, we propose an ontology-based interestingness measure for ranking ARs. According to domain experts, the goal of using ARs is to discover implicit relationships between items of different categories, such as 'clinical features and disorders' or 'clinical features and radiological observations'. That is to say, itemsets composed of 'similar' items are uninteresting. The dissimilarity between a rule's items can therefore be used to judge its interestingness: the more different the items, the more interesting the rule. In this paper, we design an approach for ranking semantically interesting association rules based on ontology knowledge mining. The basic idea is to organize the ontology's concepts into a hierarchical structure of conceptual clusters of targeted subjects, where each cluster encapsulates 'similar' concepts suggesting a specific category of the domain knowledge. The interestingness of an association rule is then defined as the dissimilarity between the corresponding clusters: the further apart the clusters of the rule's items, the more interesting the rule. We apply the method in our domain of interest, mammography, using an existing mammographic ontology called Mammo, with the goal of deriving interesting rules from past experience and discovering implicit relationships between the concepts modeling the domain.
Keywords: association rule, conceptual clusters, interestingness measures, ontology knowledge mining, ranking
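The cluster-dissimilarity idea fits in a few lines. The concept-to-cluster mapping and the rules below are invented examples, and the paper's hierarchical distance between conceptual clusters is flattened here to a same-or-different-cluster test.

```python
# assumed mapping from ontology concepts to conceptual clusters
cluster = {"mass": "radiological", "calcification": "radiological",
           "pain": "clinical", "age>60": "clinical", "malignancy": "disorder"}

def interestingness(antecedent, consequent):
    # fraction of item pairs in different clusters: more dissimilar -> more interesting
    pairs = [(a, c) for a in antecedent for c in consequent]
    return sum(cluster[a] != cluster[c] for a, c in pairs) / len(pairs)

rules = [({"mass"}, {"calcification"}),         # same cluster -> uninteresting
         ({"pain", "age>60"}, {"malignancy"})]  # crosses categories -> interesting
print(sorted(rules, key=lambda r: interestingness(*r), reverse=True))
```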
Procedia PDF Downloads 321
1070 Development and Control of Deep Seated Gravitational Slope Deformation: The Case of Colzate-Vertova Landslide, Bergamo, Northern Italy
Authors: Paola Comella, Vincenzo Francani, Paola Gattinoni
Abstract:
This paper presents the Colzate-Vertova landslide, a Deep Seated Gravitational Slope Deformation (DSGSD) located in the Seriana Valley, Northern Italy. The paper describes the development of the landslide and evaluates the factors that influence its evolution. After defining the conceptual model of the landslide, numerical simulations were developed using a finite element model, first with a two-dimensional domain and later with a three-dimensional one. The results of the 2-D model showed a displacement field typical of a sackung, a consequence of erosion along the Seriana Valley. The analysis also showed that groundwater flow could locally affect slope stability, reducing the safety factor without reaching failure conditions. The sensitivity analysis carried out on the strength parameters showed that slope failure could be reached only for a substantial reduction of the geotechnical characteristics. Such a result does not fit the real conditions observed on site, where a number of small failures often develop all along the hillslope. The 3-D model gave a more comprehensive analysis of the evolution of the DSGSD, also considering border effects. The results showed that the convex profile of the slope favors the development of displacements along the lateral valley, with a relevant reduction in the safety factor, which explains the existing landslides.
Keywords: deep seated gravitational slope deformation, Italy, landslide, numerical modeling
Procedia PDF Downloads 364
1069 Rheological and Computational Analysis of Crude Oil Transportation
Authors: Praveen Kumar, Satish Kumar, Jashanpreet Singh
Abstract:
Transporting unrefined crude oil from the production unit to a refinery or large storage area by pipeline is difficult because crude properties differ from area to area, making crude oil pipeline design a complex and time-consuming process when all parameters are considered. Three parameters play a significant role in transportation and processing pipeline design: the viscosity, temperature, and velocity profiles of waxy crude oil through the pipeline. Knowledge of rheological computational techniques is required to understand the flow behavior and predict the flow profile in a crude oil pipeline; from these profile parameters, the material and the emulsion best suited for crude oil transportation can be predicted. The rheological computational fluid dynamics technique is a fast method for designing the flow profile in a crude oil pipeline, combining computational fluid dynamics with rheological modeling. With this technique, the effects of fluid properties, including the shear rate range with temperature variation, degree of viscosity, elastic modulus, and viscous modulus, were evaluated under different conditions in a transport pipeline. In this paper, two crude oil samples were used, as well as emulsions prepared with natural and synthetic additives at concentrations ranging from 1,000 ppm to 3,000 ppm. The rheological properties were evaluated over a temperature range of 25 to 60 °C to determine which additive is best suited for crude oil transportation. Commercial computational fluid dynamics (CFD) software was used to generate the flow, velocity, and viscosity profiles of the emulsions for flow behavior analysis in a crude oil transportation pipeline. This rheological CFD design can be further applied to future pipeline designs.
Keywords: surfactant, natural, crude oil, rheology, CFD, viscosity
Procedia PDF Downloads 452
1068 Study and Simulation of the Thrust Vectoring in Supersonic Nozzles
Authors: Kbab H, Hamitouche T
Abstract:
In recent years, significant progress has been made in the field of aerospace propulsion and propulsion systems. These developments are associated with efforts to enhance the accuracy of the analysis of aerothermodynamic phenomena in the engine, particularly the flow in the nozzles used. One of the most remarkable processes in this field is thrust vectoring by means of devices able to orient the thrust vector and control the deflection of the exit jet in the engine nozzle. In this study, we are interested in fluidic thrust vectoring using a secondary injection in the nozzle divergent section. This fluid injection causes complex phenomena, such as boundary layer separation, which generates a shock wave in the primary jet upstream of the fluid interaction zone (primary jet - secondary jet). This deflects the main flow, and therefore the thrust vector, with respect to the nozzle axis. Various parameters govern fluidic thrust vectoring: the Mach numbers of the primary jet and the injected fluid, the total pressure ratio, the injection rate, the thickness of the upstream boundary layer, the injector position in the divergent section, and the nozzle geometry are decisive factors. The complexity of these phenomena challenges researchers to understand the physics of the turbulent boundary layer encountered in supersonic nozzles, as well as to calculate its thickness and the friction forces induced on the walls. The present study numerically simulates thrust vectoring by secondary injection using ANSYS Fluent, then analyzes and validates the results and the performance obtained (deflection angle, efficiency, etc.), which are compared with those obtained by other authors.
Keywords: CD nozzle, TVC, SVC, NPR, SPR, CFD
Procedia PDF Downloads 132
1067 Modeling the Human Harbor: An Equity Project in New York City, New York USA
Authors: Lauren B. Birney
Abstract:
The envisioned long-term outcome of this three-year research and implementation plan is for 1) teachers and students to design and build their own computational models of real-world environmental-human health phenomena occurring within the context of the "Human Harbor" and 2) project researchers to evaluate the degree to which these integrated Computer Science (CS) education experiences in New York City (NYC) public school classrooms (PreK-12) impact students' computational-technical skill development, job readiness, career motivations, and measurable abilities to understand, articulate, and solve the underlying phenomena at the center of their models. This effort builds on the partnership's successes over the past eight years in developing a benchmark model of restoration-based Science, Technology, Engineering, and Math (STEM) education for urban public schools and achieving relatively broad-based implementation in the nation's largest public school system. The Billion Oyster Project Curriculum and Community Enterprise for Restoration Science (BOP-CCERS STEM + Computing) curriculum, teacher professional development, and community engagement programs have reached more than 200 educators and 11,000 students at 124 schools, with 84 waterfront locations and Out of School Time (OST) programs. The BOP-CCERS Partnership is poised to develop a more refined focus on integrating computer science across the STEM domains, teaching industry-aligned computational methods and tools, and explicitly preparing students from the city's most under-resourced and underrepresented communities for upwardly mobile careers in NYC's ever-expanding "digital economy," in which jobs require computational thinking and an increasing percentage require discrete computer science technical skills. Project objectives include the following: 1. Computational Thinking (CT) Integration: integrate computational thinking core practices across the existing middle/high school BOP-CCERS STEM curriculum as a means of scaffolding toward long-term computer science and computational modeling outcomes. 2. Data Science and Data Analytics: enable researchers to perform interviews with teachers, students, community members, partners, stakeholders, and STEM industry professionals, alongside collaborative analysis and data collection. As a centerpiece, the BOP-CCERS partnership will expand to include a dedicated computer science education partner: New York City Department of Education (NYCDOE) Computer Science for All (CS4ALL) NYC will serve as the dedicated Computer Science (CS) lead, advising the consortium on integration and curriculum development and working in tandem with the partnership. The BOP-CCERS Model™ also validates that, with appropriate application of technical infrastructure, intensive teacher professional development, and curricular scaffolding, socially connected science learning can be mainstreamed in the nation's largest urban public school system, as evidenced and substantiated in the initial phases of BOP-CCERS™. The BOP-CCERS™ student curriculum and teacher professional development have been implemented in approximately 24% of NYC public middle schools, reaching more than 250 educators and 11,000 students directly. BOP-CCERS™ is a fully scalable and transferable educational model, adaptable to all American school districts.
In all settings of the proposed Phase IV initiative, the primary beneficiary group will be underrepresented NYC public school students who live in high-poverty neighborhoods and are traditionally underrepresented in the STEM fields, including African Americans, Latinos, English language learners, and children from economically disadvantaged households. In particular, BOP-CCERS Phase IV will explicitly prepare underrepresented students for skilled positions within New York City's expanding digital economy, computer science, computational information systems, and innovative technology sectors.
Keywords: computer science, data science, equity, diversity and inclusion, STEM education
Procedia PDF Downloads 58
1066 Fluid-Structure Interaction Study of Fluid Flow past Marine Turbine Blade Designed by Using Blade Element Theory and Momentum Theory
Authors: Abu Afree Andalib, M. Mezbah Uddin, M. Rafiur Rahman, M. Abir Hossain, Rajia Sultana Kamol
Abstract:
This paper deals with the analysis of flow past a marine turbine blade designed using blade element theory and momentum theory, for use in the field of renewable energy. The designed blade is analyzed for various parameters using the FSI module of ANSYS. Computational fluid dynamics is used to study the fluid flow past the blade and other fluidic phenomena such as lift, drag, pressure differentials, and energy dissipation in water. The Finite Element Analysis (FEA) module of ANSYS was used to analyze structural parameters such as stress and stress density, localization points, deflection, and force propagation. A fine mesh is used in every case for greater accuracy, within the limits of the available computational power. The relevance of design, search, and optimization with respect to complex fluid flow and structural modeling is considered and analyzed, as is design optimization for minimum drag force using the ANSYS Adjoint Solver module. The graphical comparison of the above-mentioned parameters using CFD and FEA, and subsequently the FSI technique, is illustrated and shows significant conformity between the two sets of results.
Keywords: blade element theory, computational fluid dynamics, finite element analysis, fluid-structure interaction, momentum theory
Procedia PDF Downloads 301
1065 Theoretical Modeling of Self-Healing Polymers Crosslinked by Dynamic Bonds
Authors: Qiming Wang
Abstract:
Dynamic polymer networks (DPNs) crosslinked by dynamic bonds have received intensive attention because of their special crack-healing capability. Diverse DPNs have been synthesized using a number of dynamic bonds, including dynamic covalent bonds, hydrogen bonds, ionic bonds, metal-ligand coordination, hydrophobic interactions, and others. Despite the promising success in polymer synthesis, the fundamental understanding of their self-healing mechanics is still at the very beginning. In particular, a general analytical model of the interfacial self-healing behaviors of DPNs has not been established. Here, we develop polymer-network based analytical theories that can mechanistically model the constitutive and interfacial self-healing behaviors of DPNs. We consider the DPN to be composed of interpenetrating networks crosslinked by dynamic bonds that obey force-dependent chemical kinetics, with network chains following inhomogeneous chain-length distributions. During the self-healing process, the dynamic polymer chains diffuse across the interface to reform the dynamic bonds, a process modeled by a diffusion-reaction theory. The theories can predict the stress-stretch behaviors of original and self-healed DPNs, as well as the healing strength as a function of healing time. We show that the theoretically predicted healing behaviors consistently match documented experimental results for DPNs with various dynamic bonds, including dynamic covalent bonds (diarylbibenzofuranone and olefin metathesis), hydrogen bonds, and ionic bonds. We expect our model to be a powerful tool for the self-healing community to invent, design, understand, and optimize self-healing DPNs with various dynamic bonds.
Keywords: self-healing polymers, dynamic covalent bonds, hydrogen bonds, ionic bonds
Procedia PDF Downloads 186
1064 An Integration of Genetic Algorithm and Particle Swarm Optimization to Forecast Transport Energy Demand
Authors: N. R. Badurally Adam, S. R. Monebhurrun, M. Z. Dauhoo, A. Khoodaruth
Abstract:
Transport energy demand is vital for the economic growth of any country, and globalisation and better standards of living play an important role in it. Recently, transport energy demand in Mauritius has increased significantly, leading to an overuse of natural resources and thereby contributing to global warming. Forecasting transport energy demand is therefore important for controlling and managing the demand. In this paper, we develop a model to predict transport energy demand. The model is based on a system of five stochastic differential equations (SDEs) with five endogenous variables: fuel price, population, gross domestic product (GDP), number of vehicles, and transport energy demand, and three exogenous parameters: crude birth rate, crude death rate, and labour force. An interval of seven years is used to avoid any distortion of the results, since Mauritius is a developing country. Data available for Mauritius from 2003 to 2009 are used to obtain the values of the design variables by applying a genetic algorithm (GA). The model is verified and validated for 2010 to 2012 by substituting the coefficients obtained by the GA into the model and using particle swarm optimisation (PSO) to predict the values of the exogenous parameters. This model will help to control the transport energy demand in Mauritius, which will in turn move the country toward being pollution-free and decrease its dependence on fossil fuels.
Keywords: genetic algorithm, modeling, particle swarm optimization, stochastic differential equations, transport energy demand
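A minimal GA loop of the kind used to estimate the coefficients is sketched below. The objective is a placeholder for the SDE-fitting error, and the population size, selection, crossover, and mutation settings are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sse(theta):
    # placeholder objective: fit error of the five-equation SDE system
    return np.sum((theta - np.array([0.3, 1.2, -0.5, 0.8, 0.1])) ** 2)

pop = rng.uniform(-2, 2, (50, 5))                 # 50 candidate coefficient vectors
for _ in range(200):
    fitness = np.array([sse(p) for p in pop])
    parents = pop[np.argsort(fitness)[:10]]       # truncation selection
    # uniform crossover: each child gene comes from a random parent
    children = parents[rng.integers(0, 10, (40, 5)), np.arange(5)]
    children += rng.normal(0, 0.05, children.shape)   # Gaussian mutation
    pop = np.vstack([parents, children])
print(pop[np.argmin([sse(p) for p in pop])])      # best coefficient estimate
```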
Procedia PDF Downloads 368
1063 Predictive Analysis of Chest X-rays Using NLP and Large Language Models with the Indiana University Dataset and Random Forest Classifier
Authors: Azita Ramezani, Ghazal Mashhadiagha, Bahareh Sanabakhsh
Abstract:
This study investigates the combination of Random Forest classifiers with large language models (LLMs) and natural language processing (NLP) to improve diagnostic accuracy in chest X-ray analysis using the Indiana University dataset. Using advanced NLP techniques, the research preprocesses textual data from radiological reports to extract key features, which are then merged with image-derived data. This enriched dataset is analyzed with Random Forest classifiers to predict specific clinical results, focusing on the identification of health issues and the estimation of case urgency. The findings reveal that the combination of NLP, LLMs, and machine learning increases not only diagnostic precision but also reliability, especially in quickly identifying critical conditions. Achieving an accuracy of 99.35%, the model shows significant advancement over conventional diagnostic techniques. The results emphasize the large potential of machine learning in medical imaging, suggesting that these technologies could greatly enhance clinician judgment and patient outcomes by offering quicker and more precise diagnostic estimates.
Keywords: natural language processing (NLP), large language models (LLMs), random forest classifier, chest x-ray analysis, medical imaging, diagnostic accuracy, indiana university dataset, machine learning in healthcare, predictive modeling, clinical decision support systems
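The text arm of such a pipeline can be sketched with scikit-learn. The report snippets and labels are invented stand-ins for the Indiana University corpus, and the image-derived features and LLM preprocessing described above are omitted.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline

reports = ["heart size normal lungs clear",
           "focal airspace opacity suggesting pneumonia",
           "no acute cardiopulmonary abnormality",
           "large left pleural effusion urgent follow-up"]
labels = [0, 1, 0, 1]   # 1 = abnormal / urgent (assumed labeling scheme)

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                    RandomForestClassifier(n_estimators=300, random_state=0))
clf.fit(reports, labels)
print(clf.predict(["right lower lobe opacity"]))
```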
Procedia PDF Downloads 42
1062 Presenting a Model in the Analysis of Supply Chain Management Components by Using Statistical Distribution Functions
Authors: Ramin Rostamkhani, Thurasamy Ramayah
Abstract:
One of the most important topics for today's industrial organizations is the challenging issue of supply chain management, a field in which scientists and researchers have published numerous practical articles and models, especially in the last decade. This research considers, to the best of our knowledge for the first time, the modeling of supply chain management component data using well-known statistical distribution functions; describing the behavior of supply chain data through the characteristics of these functions is innovative research that has not been published before. In an analytical process, describing different aspects of each function, including the probability density, cumulative distribution, reliability, and failure functions, leads to the statistical distribution function best suited to each supply chain management component, which can then be applied to predict the component's future behavior. A model that matches the best statistical distribution function to each supply chain management component would be a major advance in describing the behavior of supply chain elements in today's industrial organizations. As a final step, the results of the proposed model are demonstrated through process capability indices computed before and after its implementation, and the approach is verified through the relevant assessment. The introduced approach can save the time and cost required to achieve organizational goals and can increase added value in the organization.
Keywords: analyzing, process capability indices, statistical distribution functions, supply chain management components
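A compressed sketch of the approach (fit candidate distributions to a component metric, pick the best by log-likelihood, then compute capability indices) follows; the lead-time data, the candidate set, and the specification limits are assumptions, and in practice goodness-of-fit tests would support the selection.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
lead_times = rng.gamma(shape=4.0, scale=1.5, size=500)   # stand-in supply chain metric

candidates = {"gamma": stats.gamma, "lognorm": stats.lognorm,
              "weibull": stats.weibull_min, "norm": stats.norm}
best = max(candidates,
           key=lambda k: np.sum(candidates[k].logpdf(lead_times,
                                                     *candidates[k].fit(lead_times))))
print("best fit:", best)

LSL, USL = 1.0, 12.0                 # assumed specification limits
s = lead_times.std(ddof=1)
cp = (USL - LSL) / (6 * s)
cpk = min(USL - lead_times.mean(), lead_times.mean() - LSL) / (3 * s)
print(cp, cpk)                       # indices before/after would be compared this way
```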
Procedia PDF Downloads 85
1061 Effect of Needle Height on Discharge Coefficient and Cavitation Number
Authors: Mohammadreza Nezamirad, Sepideh Amirahmadian, Nasim Sabetpour, Azadeh Yazdi, Amirmasoud Hamedi
Abstract:
Cavitation inside a diesel injector nozzle is investigated using the Reynolds-averaged Navier-Stokes equations, with the Schnerr-Sauer model describing cavitation and diesel fuel as the working fluid. The flow is first validated against previous experimental data; the k-epsilon turbulence model was found to give better accuracy than the k-omega model. Moreover, the numerically obtained mass flow rate was compared with the experimental value, and the discrepancy was less than 5 percent, which supports the accuracy of the current results. A real-size four-hole nozzle is then investigated, and the flow inside it is visualized in terms of velocity profile, discharge coefficient, and cavitation number. It was found that the mesh density could be reduced significantly by using periodic boundary conditions. The velocity contour at the mid-nozzle showed that the maximum velocity occurs at the end of the needle, before the orifice area. Finally, for the same boundary conditions at different needle heights, it was found that as needle height increases, both the cavitation number and the discharge coefficient increase, and these increases are more pronounced at smaller needle heights.
Keywords: cavitation, diesel fuel, CFD, real size nozzle, mass flow rate
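The two reported quantities reduce to simple formulas; the sketch below uses assumed operating values, and the cavitation number follows one common definition (others are in use).

```python
import math

rho, d = 830.0, 0.17e-3                  # diesel density [kg/m^3], hole diameter [m] (assumed)
p_in, p_out, p_vap = 100e6, 5e6, 2000.0  # injection, back, and vapour pressures [Pa]
m_dot = 0.0070                           # measured mass flow per hole [kg/s] (assumed)

A = math.pi * d ** 2 / 4
m_ideal = A * math.sqrt(2 * rho * (p_in - p_out))  # ideal (Bernoulli) mass flow
Cd = m_dot / m_ideal                               # discharge coefficient
K = (p_in - p_vap) / (p_in - p_out)                # cavitation number
print(Cd, K)
```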
Procedia PDF Downloads 144
1060 Developing Laser Spot Position Determination and PRF Code Detection with Quadrant Detector
Authors: Mohamed Fathy Heweage, Xiao Wen, Ayman Mokhtar, Ahmed Eldamarawy
Abstract:
In this paper, we are interested in the modeling, simulation, and measurement of the laser spot position with a quadrant detector, enhancing the detection and tracking of a semi-active laser weapon decoding system based on a microcontroller. The system receives the reflected pulses through the quadrant detector and processes them through a processing circuit, with a microcontroller decoding the laser code reflected by the target. The decoding system enhances seeker accuracy; the laser detection time is reduced by basing detection on the number of received pulses, and a gate is used to limit the laser pulse width. The model is implemented with the Pulse Repetition Frequency (PRF) technique using two microcontroller units (MCUs): MCU1 generates laser pulses with different codes, while MCU2 decodes the laser code and locks the system to the specific code. The code is selected using two selector switches. The system was implemented and tested in Proteus ISIS software, and the full position determination circuit with the detector was produced. The overall spot position determination system was tested with the laser PRF as the incident radiation and a mechanical system for adjustment at different angles. The test results show that the system can detect the laser code with only three received pulses, based on the narrow gate signal, and good agreement between simulated and measured system performance is obtained.
Keywords: four quadrant detector, pulse code detection, laser guided weapons, pulse repetition frequency (PRF), Atmega 32 microcontrollers
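The three-pulse lock logic can be sketched in a few lines; the PRF codes, gate width, and timestamps below are hypothetical values, not those of the implemented firmware.

```python
TOL = 50e-6                              # timing gate width [s] (assumed)
CODES = {"A": 1 / 12.0, "B": 1 / 17.0}   # hypothetical PRF codes -> pulse period [s]

def lock(timestamps, code):
    period = CODES[code]
    dt = [t2 - t1 for t1, t2 in zip(timestamps, timestamps[1:])]
    # two consecutive intervals inside the gate == three valid pulses -> lock
    return all(abs(d - period) < TOL for d in dt[:2])

print(lock([0.0, 1 / 12.0 + 20e-6, 2 / 12.0 - 10e-6], "A"))   # True: system locks
```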
Procedia PDF Downloads 386
1059 Normalized Enterprises Architectures: Portugal's Public Procurement System Application
Authors: Tiago Sampaio, André Vasconcelos, Bruno Fragoso
Abstract:
The Normalized Systems Theory, designed to be applied to software architectures, provides a set of theorems, elements, and rules whose purpose is to enable evolution in information systems and to ensure that they are ready for change. This work applies the Normalized Systems Theory to the domain of enterprise architectures, using ArchiMate. The application is achieved by adapting the elements of the theory into artifacts of the modeling language. The theorems are applied by identifying the viewpoints to be used in the architectures and by transforming the theory's encapsulation rules into architectural rules. In this way, it is possible to create normalized enterprise architectures that fulfill the needs and requirements of the business. This solution was demonstrated using the Portuguese Public Procurement System. The Portuguese government aims to make this system as fair as possible, allowing every organization to have the same business opportunities: every economic operator should have access to all public tenders, which are published on any of the six existing platforms, independently of where they are registered. To this end, we applied our solution to the construction of two architectures capable of fulfilling the requirements of the Portuguese government. One, TO-BE A, has a message broker that performs the communication between the platforms; the other, TO-BE B, represents the scenario in which the platforms communicate with each other directly. Apart from these two architectures, we also present the AS-IS architecture, which captures the current behavior of the Public Procurement System. Our evaluation is based on a comparison between the AS-IS and TO-BE architectures regarding the fulfillment of the rules and theorems of the Normalized Systems Theory and some quality metrics.
Keywords: archimate, architecture, broker, enterprise, evolvable systems, interoperability, normalized architectures, normalized systems, normalized systems theory, platforms
Procedia PDF Downloads 356
1058 Study of the Persian Gulf’s and Oman Sea’s Numerical Tidal Currents
Authors: Fatemeh Sadat Sharifi
Abstract:
In this research, a barotropic model was employed for tidal studies of the Persian Gulf and the Oman Sea, with tidal forcing as the only driving force. To do so, the Regional Ocean Modeling System (ROMS), a finite-difference, free-surface model, was applied to the region. To analyze the flow patterns, results from a limited-area model based on the Finite Volume Community Ocean Model (FVCOM) were used. These two water bodies were selected because both are among the most critical in terms of economy, biology, fishery, shipping, navigation, and petroleum extraction. The modeled results were validated against OSU Tidal Prediction Software (OTPS) tides and observation data, and tidal elevation, current speed, and tidal harmonic analysis were then interpreted. Preliminary results show significant accuracy in tidal height compared with the observation and OTPS data and indicate that tidal currents are strongest in the Strait of Hormuz and in the narrow, shallow region between the Iranian coast and the islands. Furthermore, the tidal analysis shows that the M2 constituent is dominant. Finally, the Persian Gulf tidal currents divide into two branches: the first runs from the south toward Qatar and turns via the United Arab Emirates toward the Strait of Hormuz; the second, to the north and west, extends to the head of the Persian Gulf, where it turns counterclockwise.
Keywords: numerical model, barotropic tide, tidal currents, OSU tidal prediction software, OTPS
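Harmonic analysis of a constituent such as M2 reduces to a linear least-squares fit. The sketch below recovers amplitude and phase from a synthetic 2-hourly record standing in for the model output.

```python
import numpy as np

omega_m2 = 2 * np.pi / (12.4206012 * 3600)   # M2 angular frequency [rad/s]
t = np.arange(0, 30 * 86400, 7200.0)         # 30 days at 2-hour intervals
rng = np.random.default_rng(0)
eta = 0.6 * np.cos(omega_m2 * t - 1.1) + 0.05 * rng.normal(size=t.size)  # toy record

A = np.column_stack([np.cos(omega_m2 * t), np.sin(omega_m2 * t), np.ones_like(t)])
c, s, z0 = np.linalg.lstsq(A, eta, rcond=None)[0]
amp, phase = np.hypot(c, s), np.arctan2(s, c)
print(amp, phase)                            # recovers ~0.6 m and ~1.1 rad
```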
Procedia PDF Downloads 131
1057 An Approach For Evolving a Reliable Low Power Ultra Wide Band Transmitter with Capacitive Sensing
Abstract:
This work uses a tunable capacitor as a sensor to vary the control voltage of a voltage-controlled oscillator (VCO) in an ultra wide band (UWB) transmitter, with a focus on power consumption. Capacitive sensing is chosen because it gives low temperature drift, high sensitivity, and robustness. Previous works report resistive sensing in a VCO without addressing power consumption; here, the power consumption of capacitive sensing in a UWB transmitter is the target. The UWB transmitter used is based on direct modulation of pulses. The VCO, the heart of the UWB transmitter's pulse generator, works on the principle of voltage-to-frequency conversion. It has an odd number of inverter stages driven by the control voltage input, which now comes from a variable capacitor, and the number of buffer stages is reduced relative to previous work to maintain the oscillating frequency. The VCO is also designed to consume low power. Attention then turns to the choice of variable capacitor: a compact model of a capacitor with transient characteristics is designed with a movable dielectric and multiple metal membranes. Previous modeling of capacitor transient characteristics used one movable and one fixed membrane; this work aims at a membrane with a wide tuning range, because the capacitance in a UWB transmitter must be tuned so that the output satisfies FCC regulations.
Keywords: capacitive sensing, ultra wide band transmitter, voltage control oscillator, FCC regulation
Procedia PDF Downloads 388
1056 Evaluation of the Need for Seismic Retrofitting of the Foundation of a Five Story Steel Building Because of Adding of a New Story
Authors: Mohammadreza Baradaran, F. Hamzezarghani
Abstract:
Earthquakes of varying strength occur every year in different parts of the world, and thousands of people lose their lives to this natural phenomenon. In addition to aging, environmental effects, and wear, one reason buildings are destroyed by earthquakes is changes in the use of the building and alterations to its structure and skeleton. A large number of structures located in earthquake-prone areas were designed according to old, outdated seismic design regulations. In addition, many of the major earthquakes of recent years emphasize the value of retrofitting in decreasing seismic dangers. Seismic retrofitting of existing structures is one of the most effective methods for reducing these dangers and compensating for the lack of resistance caused by existing weaknesses. In this article, the foundation of a five-story steel building with a moment frame system is evaluated seismically, and the effect of adding a story is analyzed. The building has a steel skeleton and a roof of joists and clay blocks; after the addition of a story, it has six floors on a 1416-square-meter foundation, with the sixth floor 18.95 meters above ground level. After analysis of the foundation model, the behavior of the soil under the foundation and of the foundation elements themselves was evaluated; the foundation's deformation and the soil stresses under several load combinations were determined through repeated modeling in the SAFE software, and finally the need for retrofitting of the building's foundation was determined.
Keywords: seismic, rehabilitation, steel building, foundation
Procedia PDF Downloads 279
1055 Hysteresis Modeling in Iron-Dominated Magnets Based on a Deep Neural Network Approach
Authors: Maria Amodeo, Pasquale Arpaia, Marco Buzio, Vincenzo Di Capua, Francesco Donnarumma
Abstract:
Different deep neural network architectures have been compared and tested to predict magnetic hysteresis in the context of pulsed electromagnets for experimental physics applications. Modelling quasi-static or dynamic major, and especially minor, hysteresis loops is one of the most challenging topics for computational magnetism, and recent attempts at mathematical prediction using Preisach models could not attain better than percent-level accuracy. Hence, this work explores neural network approaches and shows that the architecture that best fits the measured magnetic field behaviour, including the effects of hysteresis and eddy currents, is the nonlinear autoregressive exogenous (NARX) neural network model. This architecture aims to achieve a relative RMSE of the order of a few hundred ppm for complex magnetic field cycling, including arbitrary sequences of pseudo-random high-field and low-field cycles. The NARX-based architecture is compared with the state of the art, showing better performance than the classical operator-based and differential models, and is tested on a reference quadrupole magnetic lens used for CERN particle beams, chosen as a case study. The training and test datasets are a representative example of real-world magnet operation; this makes the good results obtained very promising for future applications in this context.
Keywords: deep neural network, magnetic modelling, measurement and empirical software engineering, NARX
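The essence of a NARX model, regressing the output on lagged inputs and lagged outputs, can be sketched as follows. The excitation current and field response are toy stand-ins, and training here is open-loop (series-parallel) rather than the closed-loop prediction used for real magnet cycles.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
I = np.cumsum(rng.normal(size=2000))          # stand-in excitation current programme
B = np.tanh(0.01 * I) + 0.05 * np.roll(np.tanh(0.01 * I), 1)  # toy field with memory

lags = 4
X = np.column_stack([np.roll(I, k) for k in range(1, lags + 1)] +
                    [np.roll(B, k) for k in range(1, lags + 1)])[lags:]
y = B[lags:]
net = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0)
net.fit(X, y)
rmse = np.sqrt(np.mean((net.predict(X) - y) ** 2))
print(rmse / np.ptp(y) * 1e6, "ppm of full scale")   # target: a few hundred ppm
```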
Procedia PDF Downloads 129
1054 Comparison of the Factor of Safety and Strength Reduction Factor Values from Slope Stability Analysis of a Large Open Pit
Authors: James Killian, Sarah Cox
Abstract:
Stability criteria are how the results of geotechnical analyses are conveyed and how sensitivities and risk assessments are performed. Historically, the primary stability criterion for slope design has been the Factor of Safety (FOS) from a limit equilibrium calculation; increasingly, the value derived from Strength Reduction Factor (SRF) analysis is being used instead. The purpose of this work was to study in detail the relationship between SRF values produced by a numerical modeling technique and the traditional FOS values produced by limit equilibrium (LEM) analyses. The study used a model of a 3000-foot-high slope with a 45-degree slope angle, assuming a perfectly plastic Mohr-Coulomb constitutive model with the high cohesion and friction angle values typical of a large hard rock mine slope. A number of variables affecting the SRF in a numerical analysis were tested, including zone size, in-situ stress, tensile strength, and dilation angle. This paper demonstrates that in most cases SRF values are lower than the corresponding LEM FOS values. Modeled zone size has the greatest effect on the estimated SRF, which can vary as much as 15% to the downside compared to the FOS. For consistency when using SRF as a stability criterion, the authors suggest that numerical model zones should be no smaller than about 1% of the overall problem slope height and no greater than 2%. Future work could include investigating the effect of anisotropic strength assumptions or advanced constitutive models.
Keywords: FOS, SRF, LEM, comparison
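The relation between the two criteria is transparent in a simple planar mechanism: reducing c and tan(phi) by a factor F scales the FOS by 1/F, so the SRF found by driving the reduced-strength FOS to 1 coincides with the FOS itself, and the differences reported above arise only in stress-based numerical models. The sketch below, with assumed strength values for a 3000-ft slope at 45 degrees, finds the SRF by bisection.

```python
import math

c, phi = 150e3, math.radians(45)                # cohesion [Pa], friction angle (assumed)
gamma, H, beta = 26e3, 914.0, math.radians(45)  # unit weight, height (~3000 ft), slope angle

def fos(c_, phi_):
    # infinite-slope style estimate on a plane dipping at beta (illustrative only)
    sigma = gamma * H * math.cos(beta) ** 2             # normal stress on the plane
    tau = gamma * H * math.sin(beta) * math.cos(beta)   # driving shear stress
    return (c_ + sigma * math.tan(phi_)) / tau

lo, hi = 0.5, 5.0
while hi - lo > 1e-4:                           # bisect for the factor giving FOS = 1
    srf = 0.5 * (lo + hi)
    if fos(c / srf, math.atan(math.tan(phi) / srf)) > 1.0:
        lo = srf
    else:
        hi = srf
print(fos(c, phi), srf)                         # the two values coincide here
```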
Procedia PDF Downloads 307
1053 A Study on the Coefficient of Transforming Relative Lateral Displacement under Linear Analysis of Structure to Its Real Relative Lateral Displacement
Authors: Abtin Farokhipanah
Abstract:
In recent years, the analysis of structures for earthquake effects has been based on ductility design, in contrast to strength design. The ASCE 7-10 code prescribes amplifying the relative drifts calculated from a linear analysis by Cd, the Deflection Amplification Factor, to obtain the real relative drifts, which would otherwise require nonlinear analysis; this lateral drift must remain within the code limits. The purposes of this research are to calculate this amplification factor for different structures, compare it with the ASCE 7-10 values, and propose the best coefficient. To this end, short and tall steel building structures with various earthquake-resistant systems are surveyed in linear and nonlinear analyses to answer the following questions: 1. Does the Response Modification Coefficient (R) have a meaningful relation to the Deflection Amplification Factor? 2. Do structure height, seismic zone, response spectrum, and similar parameters affect the coefficient that converts linear-analysis drift to the real drift of the structure? The procedure used to conduct this research includes: (a) studying earthquake-resistant systems, (b) selecting systems and modeling them, (c) analyzing the modeled systems using linear and nonlinear methods, (d) calculating the conversion coefficient for each system, and (e) comparing the conversion coefficients with those offered by the code and drawing conclusions.
Keywords: ASCE07-10 code, deflection amplification factor, earthquake engineering, lateral displacement of structures, response modification coefficient
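The amplification itself is a one-line calculation (ASCE 7-10 Eq. 12.8-15); the Cd below is the Table 12.2-1 value for a steel special moment frame, and the elastic drift is an assumed example.

```python
Cd, Ie = 5.5, 1.0        # deflection amplification factor and importance factor
delta_xe = 0.012         # storey drift from the linear (elastic) analysis [m] (assumed)

delta_x = Cd * delta_xe / Ie   # estimated real (inelastic) storey drift
print(delta_x)                 # 0.066 m, to be checked against the allowable drift
```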
Procedia PDF Downloads 353
1052 Preparing a Library of Abnormal Masses for Designing a Long-Lasting Anatomical Breast Phantom for Ultrasonography Training
Authors: Nasibullina A., Leonov D.
Abstract:
The ultrasonography method is actively used for the early diagnosis of various lesions in the human body, including the mammary gland. The incidence of breast cancer has increased by more than 20%, and mortality by 14%, since 2008. The correctness of the diagnosis often depends directly on the qualifications and experience of the diagnostic medical sonographer, which is why special attention should be paid to the practical training of future specialists. Anatomical phantoms are excellent teaching tools because they accurately imitate the characteristics of real human tissues and organs. The purpose of this work is to create a breast phantom for practicing ultrasound diagnostic skills in grayscale and elastography imaging, as well as ultrasound-guided biopsy sampling. We used silicone-like compounds ranging from 3 to 17 Shore hardness units to simulate soft tissue and lesions. Impurities at experimentally selected concentrations were added to give the phantom the necessary attenuation and reflection parameters. We used 3D modeling software and 3D printing with PLA plastic to create the casting mold. The resulting breast phantom has inclusions of varying shape, elasticity, and echogenicity. After testing the phantom in B-mode and elastography mode, we surveyed 19 participants on how realistic the sonograms of the phantom were; the cyst model was rated closest to reality, scoring 9.5 on a 0-10 similarity scale. Thus, the developed breast phantom can be used for ultrasonography, elastography, and ultrasound-guided biopsy training.
Keywords: breast ultrasound, mammary gland, mammography, training phantom, tissue-mimicking materials
Procedia PDF Downloads 91
1051 Airline Choice Model for Domestic Flights: The Role of Airline Flexibility
Authors: Camila Amin-Puello, Lina Vasco-Diaz, Juan Ramirez-Arias, Claudia Munoz, Carlos Gonzalez-Calderon
Abstract:
Operational flexibility is a fundamental aspect of the airline business: although demand is constantly changing, companies must provide a service that satisfies users' needs efficiently without sacrificing factors such as comfort, safety, and other perception variables. The objective of this research is to understand the factors that describe and explain operational flexibility by applying advanced analytical methods such as exploratory factor analysis and structural equation modeling, examining multiple levels of operational flexibility and understanding how this variable influences users' choice of airline and, in turn, how it affects the airlines themselves. The use of a hybrid model with latent variables improves the efficiency and accuracy of airline performance prediction in the unpredictable Colombian market. This pioneering study delves into traveler motivations and their impact on domestic flight demand, offering valuable insights for optimizing resources and improving the overall traveler experience. Applying these methods, it was found that low-cost airlines are not perceived as flexible, while users, especially women, found airlines with greater flexibility in ticket costs and flight schedules more useful. All of this allows airlines to anticipate and adapt to their customers' needs efficiently: to plan flight capacity appropriately, adjust pricing strategies, and improve the overall passenger experience.
Keywords: hybrid choice model, airline, business travelers, domestic flights
Procedia PDF Downloads 10
1050 Geo-Collaboration Model between a City and Its Inhabitants to Develop Complementary Solutions for Better Household Waste Collection
Authors: Abdessalam Hijab, Hafida Boulekbache, Eric Henry
Abstract:
According to several research studies, the city as a whole is a complex, spatially organized system; its modeling must take into account several factors (socio-economic, political, and geographical) acting at multiple scales of observation and over varied temporalities. Sustainable management and protection of the environment in this complex system require significant human and technical investment, particularly for monitoring and maintenance. The objective of this paper is to propose an intelligent approach based on the coupling of Geographic Information System (GIS) and Information and Communications Technology (ICT) tools in order to integrate the inhabitants into the processes of sustainable management and protection of the urban environment, specifically in household waste collection in urban areas. We discuss a collaborative 'city/inhabitant' space: a geo-collaborative approach based on the spatialization and real-time geo-localization of topological and multimedia data captured by the 'active' inhabitant, in the form of geo-localized alerts related to household waste issues in their city. Our proposal provides a good understanding of the extent to which civil society (the inhabitants) can help develop complementary solutions for the collection of household waste and the protection of the urban environment. Moreover, it allows the inhabitant to contribute to the enrichment of a data bank for future uses. Our geo-collaborative model will be tested in the Lamkansa sampling district of Casablanca, Morocco.
Keywords: geographic information system, GIS, information and communications technology, ICT, geo-collaboration, inhabitants, city
Procedia PDF Downloads 114
1049 Modelling Heat Transfer Characteristics in the Pasteurization Process of Medium Long Necked Bottled Beers
Authors: S. K. Fasogbon, O. E. Oguegbu
Abstract:
Pasteurization is one of the most important steps in the preservation of beer products, improving shelf life by inactivating almost all the spoilage organisms present. However, it is difficult to determine the slowest heating zone, the temperature profile, and the pasteurization units inside bottled beer during pasteurization, and the problem has attracted significant experimental and ANSYS Fluent studies. This work developed a computational fluid dynamics model using COMSOL Multiphysics. The model was simulated to determine the slowest heating zone, the temperature profile, and the pasteurization units inside bottled beer during the pasteurization process, and the simulation results were compared with existing data in the literature. The results showed that the location and size of the slowest heating zone depend on the time-temperature combination of each zone. The temperature profile of the bottled beer was affected by natural convection resulting from density variation during pasteurization, and the pasteurization units increase with time according to the temperature reached by the beer. Although the results of this work agreed with the literature on the slowest heating zone and temperature profiles, the pasteurization-unit results did not agree; this was suspected to be strongly affected by the bottle geometry and by the specific heat capacity and density of the beer in question. The work concludes that for effective pasteurization, the spray water temperature and the time spent by the bottled product in each pasteurization zone need to be optimized.
Keywords: modeling, heat transfer, temperature profile, pasteurization process, bottled beer
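The pasteurization-unit bookkeeping that the simulation performs follows the standard brewing definition, PU = t x 1.393^(T - 60 °C); the zone schedule below is an assumed example.

```python
def pasteurization_units(minutes, temp_c):
    # one PU = 1 minute at 60 degC; lethality scales as 1.393 per degC above 60
    return minutes * 1.393 ** (temp_c - 60.0)

# assumed tunnel schedule: (minutes, zone temperature in degC)
zones = [(5, 55), (10, 62), (5, 58)]
print(sum(pasteurization_units(m, T) for m, T in zones))
```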
Procedia PDF Downloads 202
1048 Power Production Performance of Different Wave Energy Converters in the Southwestern Black Sea
Authors: Ajab G. Majidi, Bilal Bingölbali, Adem Akpınar
Abstract:
This study investigates the energy (the economic wave energy potential) that existing wave energy converters can produce in the high-potential southwestern region of the Black Sea, and their performance at different depths. The data needed for this purpose were obtained using the calibrated, nested, layered SWAN wave model, version 41.01AB, forced with Climate Forecast System Reanalysis (CFSR) winds from 1979 to 2009. A wave dataset at 2-hour intervals was accumulated for a sub-grid domain around Karaburun beach in Arnavutkoy, a district of Istanbul. Annual sea-state characteristic matrices were calculated over the 31 years for five depths along a line perpendicular to the coastline. From the power matrices of different wave energy converter systems and the characteristic matrices for each possible installation depth, occurrence probability tables of the mean wave period (or wave energy period) and significant wave height were calculated. Combining these tables, the energy that each wave energy converter system could produce at each depth under the present wave climate was determined. Thus, the economically feasible potential of the relevant coastal zone was revealed, and the effect of depth on the energy converter systems is presented. The Oceantic at 50, 75, and 100 m depths and the Oyster at 5 and 25 m depths present the best performance. Over the 31-year period, 1998 was the most dynamic year and 1989 the least.
Keywords: annual power production, Black Sea, efficiency, power production performance, wave energy converter
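The power-production step reduces to weighting a converter's power matrix by the sea-state occurrence matrix; the 3x3 matrices below are invented miniatures of the real Hs-Te tables.

```python
import numpy as np

# occurrence probability of each (Hs, Te) bin over the year (rows: Hs, cols: Te)
prob = np.array([[0.30, 0.15, 0.05],
                 [0.10, 0.20, 0.08],
                 [0.02, 0.06, 0.04]])          # sums to 1.0
# converter power matrix for the same bins [kW] (invented values)
power = np.array([[  5.0,  10.0,   8.0],
                  [ 40.0,  90.0,  70.0],
                  [120.0, 250.0, 200.0]])

mean_kw = (prob * power).sum()
print(mean_kw * 8760 / 1000, "MWh/year")       # annual production
print(mean_kw / power.max(), "capacity factor")
```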
Procedia PDF Downloads 131
1047 Crashworthiness Optimization of an Automotive Front Bumper in Composite Material
Authors: S. Boria
Abstract:
In recent years, it has become possible to improve the crashworthiness of an automotive body structure from the very beginning of the design stage, thanks to the development of specific optimization tools. It is well known that finite element codes can help the designer investigate the crash performance of structures under dynamic impact. Therefore, by coupling nonlinear mathematical programming procedures and statistical techniques with FE simulations, it is possible to optimize the design with a reduced number of analytical evaluations. In engineering applications, many optimization methods based on statistical techniques that use estimated models, called meta-models, are quickly spreading. A meta-model is an approximation of a detailed simulation model based on a dataset of inputs identified by the design of experiments (DOE); the number of simulations needed to build it depends on the number of variables. Among the various meta-modeling techniques, the Kriging method appears excellent in accuracy, robustness, and efficiency compared to the others when applied to crashworthiness optimization. This meta-model was therefore used in this work to improve the structural optimization of a composite bumper for a racing car subjected to frontal impact. The specific energy absorption is the objective function to maximize, and the geometrical parameters, subject to design constraints, are the design variables. LS-DYNA was interfaced with the LS-OPT tool to find the optimized solution through a domain reduction strategy. With the Kriging meta-model, the crashworthiness characteristics of the composite bumper were improved.
Keywords: composite material, crashworthiness, finite element analysis, optimization
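A minimal Kriging-based optimization step looks like the following; the two design variables, the analytic stand-in for the LS-DYNA response, and the random candidate search are assumptions replacing the DOE and domain-reduction strategy described above.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, (20, 2))                  # DOE points: e.g. thickness, taper (scaled)
sea = np.sin(3 * X[:, 0]) + 0.5 * X[:, 1] ** 2  # stand-in for simulated SEA values

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
gp.fit(X, sea)                                   # Kriging meta-model of the FE response
cand = rng.uniform(0, 1, (5000, 2))              # search the meta-model, not the FE code
best = cand[np.argmax(gp.predict(cand))]
print(best)                                      # next design to verify with a crash run
```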
Procedia PDF Downloads 253
1046 Statistical Modeling of Local Area Fading Channels Based on Triply Stochastic Filtered Marked Poisson Point Processes
Authors: Jihad Daba, Jean-Pierre Dubois
Abstract:
Multipath fading noise degrades the performance of cellular communication, most notably in femto- and pico-cells in 3G and 4G systems. When the wireless channel consists of a small number of scattering paths, the statistics of the fading noise are not analytically tractable, which poses a serious challenge to developing closed canonical forms that can be analysed and used in the design of efficient and optimal receivers. In this context, the noise is multiplicative and is referred to as stochastically local fading. Many analytical investigations of multiplicative noise invoke exponential or Gamma statistics. More recent advances by the author of this paper have utilized Poisson-modulated, weighted generalized Laguerre polynomials with controlling parameters and uncorrelated-noise assumptions. In this paper, we investigate the statistics of a multi-diversity, stochastically local area fading channel in which the channel consists of randomly distributed Rayleigh and Rician scattering centers with a coherent specular Nakagami-distributed line-of-sight component and an underlying doubly stochastic Poisson process driven by a lognormal intensity. These combined statistics form a unifying triply stochastic filtered marked Poisson point process model.
Keywords: cellular communication, femto and pico-cells, stochastically local area fading channel, triply stochastic filtered marked Poisson point process
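The compound model can be simulated directly. The sketch below draws a lognormal intensity, a Poisson path count, and Rayleigh-marked scatterers plus a specular component (a constant-amplitude simplification of the Nakagami-distributed line of sight); all parameter values are assumed.

```python
import numpy as np

rng = np.random.default_rng(0)

def envelope(mean_paths=5.0, sigma_ln=0.5, los_amp=1.0):
    lam = rng.lognormal(np.log(mean_paths), sigma_ln)   # lognormal-driven intensity
    n = rng.poisson(lam)                                # number of scattering paths
    phases = rng.uniform(0, 2 * np.pi, n)
    gains = rng.rayleigh(0.3, n)                        # diffuse marks on the point process
    diffuse = np.sum(gains * np.exp(1j * phases))
    los = los_amp * np.exp(1j * rng.uniform(0, 2 * np.pi))  # specular component
    return abs(los + diffuse)

samples = [envelope() for _ in range(100_000)]
print(np.mean(samples), np.var(samples))   # empirical envelope statistics
```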
Procedia PDF Downloads 446