Search results for: comprehensive model
16650 Assessment of High Frequency Solidly Mounted Resonator as Viscosity Sensor
Authors: Vinita Choudhary
Abstract:
Solidly Mounted Resonators (SMR) based on ZnO piezoelectric material, operating at a frequency of 3.96 GHz with a 6.49% coupling factor, are used to characterize liquids of different viscosities. The behavior of the sensor is analyzed using finite element modeling. The device architecture comprises a bulk acoustic wave resonator on a Mo/SiO₂ Bragg mirror reflector and a silicon substrate. The proposed SMR sensor relies on the mass loading effect: the resonant frequency of the resonator shifts because of the increased density caused by the absorbed liquids (water, acetone, olive oil) used in the theoretical calculation. The sensitivity of the sensor ranges from 0.238 MHz/mPa·s to 83.33 MHz/mPa·s, consistent with the Kanazawa model. The obtained results are also compared with previous works on BAW viscosity sensors.
Keywords: solidly mounted resonator, Bragg mirror, Kanazawa model, finite element model
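As a point of reference for the Kanazawa model cited above, the following sketch evaluates the Kanazawa-Gordon expression for the resonant-frequency shift produced by a Newtonian liquid load. The resonator constants (quartz values) and liquid properties are illustrative placeholders, not the ZnO SMR parameters of the paper.

```python
import math

def kanazawa_shift(f0, rho_liq, eta_liq, rho_res=2648.0, mu_res=2.947e10):
    """Kanazawa-Gordon frequency shift (Hz) for a Newtonian liquid load.

    f0      : unloaded resonant frequency [Hz]
    rho_liq : liquid density [kg/m^3]
    eta_liq : liquid viscosity [Pa.s]
    rho_res, mu_res : resonator density [kg/m^3] and shear modulus [Pa]
                      (quartz values used here as placeholders)
    """
    return -f0 ** 1.5 * math.sqrt(rho_liq * eta_liq / (math.pi * rho_res * mu_res))

f0 = 3.96e9  # resonant frequency quoted in the abstract
for name, rho, eta in [("water", 998.0, 1.0e-3), ("acetone", 784.0, 0.32e-3)]:
    print(f"{name}: delta f = {kanazawa_shift(f0, rho, eta) / 1e6:.2f} MHz")
```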
Procedia PDF Downloads 83
16649 A Comparative Study of Additive and Nonparametric Regression Estimators and Variable Selection Procedures
Authors: Adriano Z. Zambom, Preethi Ravikumar
Abstract:
One of the biggest challenges in nonparametric regression is the curse of dimensionality. Additive models are known to overcome this problem by estimating only the individual additive effect of each covariate. However, if the model is misspecified, the accuracy of the estimator compared to the fully nonparametric one is unknown. In this work, the efficiency of fully nonparametric regression estimators such as the loess is compared to estimators that assume additivity in several situations, including additive and non-additive regression scenarios. The comparison is done by computing the oracle mean squared error of the estimators with regard to the true nonparametric regression function. Then, a backward elimination selection procedure based on the Akaike Information Criterion is proposed, computed from either the additive or the nonparametric model. Simulations show that if the additive model is misspecified, the percentage of time it fails to select important variables can be higher than that of the fully nonparametric approach. A dimension reduction step is included when the nonparametric estimator cannot be computed due to the curse of dimensionality. Finally, the Boston housing dataset is analyzed using the proposed backward elimination procedure and the selected variables are identified.
Keywords: additive model, nonparametric regression, variable selection, Akaike Information Criterion
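To make the selection procedure concrete, the sketch below runs an AIC-based backward elimination loop. An ordinary least squares fit from statsmodels stands in for the additive or nonparametric estimator described in the abstract, and the synthetic data and variable names are assumptions for illustration only.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def backward_elimination_aic(X, y):
    """Greedily drop one covariate at a time as long as the AIC improves."""
    selected = list(X.columns)
    best_aic = sm.OLS(y, sm.add_constant(X[selected])).fit().aic
    improved = True
    while improved and len(selected) > 1:
        improved = False
        for col in list(selected):
            trial = [c for c in selected if c != col]
            aic = sm.OLS(y, sm.add_constant(X[trial])).fit().aic
            if aic < best_aic:          # removing `col` improves the criterion
                best_aic, selected, improved = aic, trial, True
    return selected, best_aic

# Synthetic example: x1 and x2 matter, x3 is pure noise.
rng = np.random.default_rng(0)
X = pd.DataFrame(rng.normal(size=(200, 3)), columns=["x1", "x2", "x3"])
y = 2.0 * X["x1"] - 1.5 * X["x2"] + rng.normal(scale=0.5, size=200)
print(backward_elimination_aic(X, y))
```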
Procedia PDF Downloads 266
16648 Simulation of Nonlinear Behavior of Reinforced Concrete Slabs Using Rigid Body-Spring Discrete Element Method
Authors: Felix Jr. Garde, Eric Augustus Tingatinga
Abstract:
Most analysis procedures for reinforced concrete (RC) slabs are based on elastic theory. When subjected to large forces, however, slabs deform beyond the elastic range, and the study of their behavior and performance requires nonlinear analysis. This paper presents a numerical model to simulate the nonlinear behavior of RC slabs using the rigid body-spring discrete element method. The proposed slab model, composed of rigid plate elements and nonlinear springs, is based on the yield line theory, which assumes that the nonlinear behavior of an RC slab subjected to transverse loads is contained in plastic or yield lines. In this model, the displacement of the slab is completely described by the rigid elements, and the deformation energy is concentrated in the flexural springs uniformly distributed along the potential yield lines. The spring parameters are determined by comparing the transverse displacements and stresses developed in the slab obtained using FEM and the proposed model with an assumed homogeneous material. Numerical models of typical RC slabs with varying geometry, reinforcement, support conditions, and loading conditions show reasonable agreement with available experimental data. The model was also shown to be useful in investigating the dynamic behavior of slabs.
Keywords: RC slab, nonlinear behavior, yield line theory, rigid body-spring discrete element method
Procedia PDF Downloads 325
16647 Improved Structure and Performance by Shape Change of Foam Monitor
Authors: Tae Gwan Kim, Hyun Kyu Cho, Young Hoon Lee, Young Chul Park
Abstract:
Foam monitors are devices installed on cargo tank decks to suppress cargo area fires on oil tankers or hazardous chemical cargo ships. In general, the main design parameter of a foam monitor is the projection distance of the foam through the monitor. In this study, the relationship between flow characteristics and projection distance, depending on the shape, was examined, and numerical techniques for the fluid analysis of foam monitors were developed for this prediction. The flow pattern of the fluid varies depending on the shape of the flow path of the foam monitor, and the flow losses affecting projection distance were calculated through numerical analysis. The basic shape of the foam monitor was an L-shape designed by N Company. The modified model increased the length of the flow path and used an S-shape. The calculation results show that the basic L-shape has the problem that the force is directed to one side, generating vibration and noise there. To solve this problem, the modified S-shaped model was used. As a result, the problem is resolved and the projection distance from the nozzle is improved.
Keywords: CFD, foam monitor, projection distance, moment
Procedia PDF Downloads 345
16646 Application of Model Free Adaptive Control in Main Steam Temperature System of Thermal Power Plant
Authors: Khaing Yadana Swe, Lillie Dewan
Abstract:
At present, cascade PID control is widely used to control the superheating temperature (main steam temperature). As the main steam temperature exhibits large inertia, large time delay, and time-varying behavior, a conventional PID control strategy cannot achieve good control performance. In order to overcome the poor performance and deficiencies of the main steam temperature control system, a Model Free Adaptive Control (MFAC)-P cascade control system is proposed in this paper. By substituting MFAC for the PID controller in the main control loop of the main steam temperature control, the system can overcome time delays, nonlinearity, disturbances, and time variation.
Keywords: model-free adaptive control, cascade control, adaptive control, PID
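For readers unfamiliar with MFAC, the sketch below implements the standard compact-form dynamic-linearization MFAC law (pseudo-partial-derivative estimation plus a model-free control update) on a simple first-order stand-in plant. The plant and all tuning constants are illustrative assumptions, not the steam-temperature dynamics of the paper.

```python
import numpy as np

rho, lam = 0.6, 1.0   # control-law gains
eta, mu = 0.5, 1.0    # pseudo-partial-derivative (PPD) estimator gains

def simulate(n_steps=200, y_ref=1.0):
    y, u = np.zeros(n_steps), np.zeros(n_steps)
    phi = np.full(n_steps, 0.5)          # PPD estimate
    for k in range(1, n_steps - 1):
        du = u[k - 1] - u[k - 2] if k >= 2 else 0.0
        dy = y[k] - y[k - 1]
        # 1) update the PPD estimate from the latest input/output increments
        phi[k] = phi[k - 1]
        if abs(du) > 1e-8:
            phi[k] += eta * du * (dy - phi[k - 1] * du) / (mu + du ** 2)
        # 2) model-free control law driven by the tracking error
        u[k] = u[k - 1] + rho * phi[k] * (y_ref - y[k]) / (lam + phi[k] ** 2)
        # Stand-in plant: first-order lag (not the boiler superheater model)
        y[k + 1] = 0.9 * y[k] + 0.1 * u[k]
    return y

print(simulate()[-5:])   # the output should settle near the setpoint
```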
Procedia PDF Downloads 603
16645 Constructing Service Innovation Model for SMEs in Automotive Service Industries: A Case Study of Auto Repair Motorcycle in Makassar City
Authors: Muhammad Farid, Jen Der Day
Abstract:
The purpose of this study is to explore the construction of a service innovation model for small and medium-sized enterprises (SMEs) in the automotive service industries. A case study of motorcycle repair shops in Makassar city illustrates how to measure innovation implementation and the degree of innovation, and identifies the type of innovation through the service innovation model for SMEs. In this paper, we interviewed 10 managers of SMEs and analyzed their answers. We find that innovation implementation has been slow, producing on average only 0.62 new service innovations per year. Incremental innovation is the present option for SMEs because they choose the safer road of improving services continuously. If they want to create radical innovation, they must still consider cost, systems, and the readiness of human resources.
Keywords: service innovation, incremental innovation, SMEs, automotive service industries
Procedia PDF Downloads 361
16644 AI-Powered Conversation Tools - Chatbots: Opportunities and Challenges That Present to Academics within Higher Education
Authors: Jinming Du
Abstract:
With the COVID-19 pandemic beginning in 2020, many higher education institutions and education systems turned to hybrid or fully online distance courses to maintain social distance and provide a safe virtual space for learning and teaching. However, the majority of faculty members were not well prepared for the shift to blended or distance learning, and communication frustrations are prevalent in both hybrid and fully distance courses. A systematic literature review was conducted through a comprehensive analysis of 1688 publications focused on the adoption of chatbots in education. This study aimed to explore instructors' experiences with chatbots in online and blended undergraduate English courses. Language learners are overwhelmed by the variety of information offered by many online sites, and recently emerged chatbots (e.g., ChatGPT) perform somewhat better than earlier technologies such as tapes, video recorders, and websites. The field of chatbots has been intensively researched, and new methods have been developed to demonstrate how students can best learn and practice a new language in the target language. However, while chatbots have been used as effective tools for communicating with business customers, in consulting, and in the medical field, they have not yet been fully explored and implemented in language education. This issue is challenging enough for language teachers that they need to study it and conduct research carefully to clarify it. Pedagogical chatbots may alleviate the perceived lack of communication and feedback from instructors by interacting naturally with students and scaffolding learners' understanding, much as educators do. However, educators and instructors often lack the proficiency to operate this emerging AI chatbot technology effectively and require comprehensive study or structured training to attain competence. There is a gap between language teachers' perceptions and recent advances in the application of AI chatbots to language learning. The results of the study found that although teachers felt that chatbots did the best job of giving feedback, the teachers needed additional training to be able to give better instructions and to use chatbots to assist their teaching. Teachers generally perceive chatbots as offering substantial assistance to English language instruction.
Keywords: artificial intelligence in education, chatbots, education and technology, education system, pedagogical chatbot, chatbots and language education
Procedia PDF Downloads 68
16643 Proposition Model of Micromechanical Damage to Predict Reduction in Stiffness of a Fatigued A-SMC Composite
Authors: Houssem Ayari
Abstract:
Sheet molding compounds (SMC) are high-strength thermoset moulding materials, reinforced with glass fibres and processed by thermocompression. SMC composites combine glass fibres with polyester, phenolic, vinyl ester, or unsaturated acrylic resins to produce a high-strength moulding compound. These materials are usually formulated to meet the performance requirements of the moulded part. In addition, the vinyl ester resins used in the new advanced SMC systems (A-SMC) have many desirable features, including mechanical properties comparable to epoxy, excellent chemical resistance and tensile strength, and cost competitiveness. In this paper, a proposed micromechanical damage model is used to account for the evolution of the Young's modulus of an advanced SMC (A-SMC) composite under fatigue tests. The proposed model and the approach used are in good agreement with the experimental results.
Keywords: SFRC composites, damage, fatigue, Mori-Tanaka
Procedia PDF Downloads 119
16642 Human Performance Technology (HPT) as an Entry Point to Achieve Organizational Development in Educational Institutions of the Ministry of Education
Authors: Alkhathlan Mansour
Abstract:
The current research aims at achieving organizational development in the educational institutions of the governorate of Al-Kharj through the human performance technology (HPT) model named "The Intellectual Model to Improve Human Performance". To achieve the goal of this research, research tools consisting of targeted questionnaires were administered to a sample of 120 respondents: department managers at Prince Sattam Bin Abdulaziz University (50), educational supervisors in the Department of Education (40), and school administrators in the governorate (30), complemented by the views of education experts gathered through personal interviews on the proposal to achieve organizational development through the intellectual model to improve human performance. Among the most important research results is that many obstacles prevent organizational development in educational institutions, so the research proposes a model to achieve organizational development through human performance technologies. Based on the results, the researcher also recommends that administrators take into account justice in the distribution of incentives to employees of educational institutions, train leaders in educational institutions on organizational development strategies, and prepare organizational development experts in educational institutions to develop the necessary policies and procedures for each institution.
Keywords: human performance, development, education, organizational
Procedia PDF Downloads 290
16641 Revisiting Historical Illustrations in the Age of Digital Anatomy Education
Authors: Julia Wimmers-Klick
Abstract:
In the contemporary study of anatomy, medical students utilize a diverse array of resources, including lab handouts, lectures, and, increasingly, digital media such as interactive anatomy apps and digital images. Notably, a significant shift has occurred, with fewer students possessing traditional anatomy atlases or books, reflecting a broader trend towards digital approaches like Virtual Reality, Augmented Reality, and web-based programs. This paper seeks to explore the evolution of anatomy education by contrasting current digital tools with historical resources, such as classical anatomical illustrations and atlases, to assess their relevance and potential benefits in modern medical education. Through a comprehensive literature review, the development of anatomical illustrations is traced from the textual descriptions of Galen to the detailed and artistic representations of Da Vinci, Vesalius, and later anatomists. The examination includes how the printing press facilitated the dissemination of anatomical knowledge, transforming covert dissections into public spectacles and formalized teaching practices. Historical illustrations, often influenced by societal, religious, and aesthetic contexts, not only served educational purposes but also reflected the prevailing medical knowledge and ethical standards of their times. Critical questions are raised about the place of historical illustrations in today's anatomy curriculum. Specifically, their potential to teach critical thinking, highlight the history of medicine, and offer unique insights into past societal conditions is explored. These resources are viewed in their context, including the lack of diversity and the presence of ethical concerns, such as the use of illustrations from unethical sources like Pernkopf’s atlas. In conclusion, while digital tools offer innovative ways to visualize and interact with anatomical structures, historical illustrations provide irreplaceable value in understanding the evolution of medical knowledge and practice. The study advocates for a balanced approach that integrates traditional and modern resources to enrich medical education, promote critical thinking, and provide a comprehensive understanding of anatomy. Future research should investigate the optimal combination of these resources to meet the evolving needs of medical learners and the implications of the digital shift in anatomy education.
Keywords: human anatomy, historical illustrations, historical context, medical education
Procedia PDF Downloads 25
16640 In vitro Skin Model for Enhanced Testing of Antimicrobial Textiles
Authors: Steven Arcidiacono, Robert Stote, Erin Anderson, Molly Richards
Abstract:
There are numerous standard test methods for antimicrobial textiles that measure activity against specific microorganisms. However, these results often do not translate to the performance of treated textiles when worn by individuals. Standard test methods apply a single target organism, grown under optimal conditions, to a textile, then recover the organism to quantify it and determine activity; this does not reflect the actual performance environment, which consists of polymicrobial communities under less-than-optimal conditions, or the interaction of the textile with the skin substrate. Here we propose the development of an in vitro skin model method to bridge the gap between laboratory testing and wear studies. The model will consist of a defined polymicrobial community of 5-7 commensal microbes simulating the skin microbiome, seeded onto a solid tissue platform representing the skin. The protocol would entail adding a non-commensal test organism of interest to the defined community and applying a textile sample to the solid substrate. Following incubation, the textile would be removed and the organisms recovered and quantified to determine antimicrobial activity. Important parameters to consider include the identification and assembly of the defined polymicrobial community, the growth conditions that allow the establishment of a stable community, and the choice of skin surrogate. This model could answer the following questions: 1) Is the treated textile effective against the target organism? 2) How is the defined community affected? 3) Does the textile cause unwanted effects on the skin simulant? The proposed model would determine activity under conditions comparable to the intended application and provide expanded knowledge relative to current test methods.
Keywords: antimicrobial textiles, defined polymicrobial community, in vitro skin model, skin microbiome
Procedia PDF Downloads 140
16639 Predicting Radioactive Waste Glass Viscosity, Density and Dissolution with Machine Learning
Authors: Joseph Lillington, Tom Gout, Mike Harrison, Ian Farnan
Abstract:
The vitrification of high-level nuclear waste within borosilicate glass and its incorporation within a multi-barrier repository deep underground is widely accepted as the preferred disposal method. However, for this to happen, any safety case will require validation that the initially localized radionuclides will not be considerably released into the near/far-field. Therefore, accurate mechanistic models are necessary to predict glass dissolution, and these should be robust to a variety of incorporated waste species and leaching test conditions, particularly given substantial variations across international waste-streams. Here, machine learning is used to predict glass material properties (viscosity, density) and glass leaching model parameters from large-scale industrial data. A variety of different machine learning algorithms have been compared to assess performance. Density was predicted solely from composition, whereas viscosity prediction additionally considered temperature. To predict suitable glass leaching model parameters, a large simulated dataset was created by coupling MATLAB and the chemical reactive-transport code HYTEC, considering the state-of-the-art GRAAL model (glass reactivity in allowance of the alteration layer). The trained models were then applied to the large-scale industrial, experimental data to identify potentially appropriate model parameters. Results indicate that ensemble methods can accurately predict viscosity as a function of temperature and composition across all three industrial datasets. Glass density prediction shows reliable learning performance, with predictions primarily being within the experimental uncertainty of the test data. Furthermore, machine learning can predict glass dissolution model parameter behavior, demonstrating potential value in GRAAL model development and in assessing suitable model parameters for large-scale industrial glass dissolution data.
Keywords: machine learning, predictive modelling, pattern recognition, radioactive waste glass
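As a concrete illustration of the ensemble approach described above, the sketch below fits a gradient-boosted regressor to predict (log) viscosity from composition and temperature. The feature choices and the synthetic training data are placeholders; the industrial and simulated datasets of the study would be substituted.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

# Placeholder features: two oxide fractions plus temperature -> log10(viscosity).
rng = np.random.default_rng(1)
n = 500
X = np.column_stack([
    rng.uniform(0.40, 0.60, n),   # SiO2 fraction (assumed feature)
    rng.uniform(0.10, 0.25, n),   # B2O3 fraction (assumed feature)
    rng.uniform(1100, 1300, n),   # melt temperature [K]
])
# Toy target standing in for measured log-viscosity.
y = 5.0 + 8.0 * X[:, 0] - 3.0 * X[:, 1] - 0.004 * X[:, 2] + rng.normal(0, 0.1, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(n_estimators=300, max_depth=3, random_state=0)
model.fit(X_tr, y_tr)
print("held-out MAE:", mean_absolute_error(y_te, model.predict(X_te)))
```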
Procedia PDF Downloads 117
16638 The Strategy of Teaching Digital Art in Classroom as a Way of Enhancing Pupils’ Artistic Creativity
Authors: Aber Salem Aboalgasm, Rupert Ward
Abstract:
Teaching art by digital means is a big challenge for the majority of teachers of art and artistic design courses in primary schools. These courses can clearly identify relationships between art, technology, and creativity in the classroom. The aim of this article is to present a modern way of teaching art, using digital tools in the art classroom, in order to improve creative ability in pupils aged between 9 and 11 years; it also presents a conceptual model for creativity based on digital art. The model could be useful for pupils interested in learning drawing and using an e-drawing package, and for teachers who are interested in teaching their students modern digital art and improving children's creativity. This model is designed to show the strategy of teaching art through technology, in order for children to learn how to be creative. It will also help education providers make suitable choices about which technological approaches to use to teach students and enhance their creative ability, and it defines the digital art tools that can help children develop their technical skills. It is also expected that use of this model will help to develop social interactive qualities that may improve intellectual ability.
Keywords: digital tools, motivation, creative activity, technical skill
Procedia PDF Downloads 464
16637 Statistical Inferences for GQARCH-Itô-Jumps Model Based on the Realized Range Volatility
Authors: Fu Jinyu, Lin Jinguan
Abstract:
This paper introduces a novel approach that unifies two types of models: the continuous-time jump-diffusion used to model high-frequency data, and the discrete-time GQARCH employed to model low-frequency financial data, by embedding the discrete GQARCH structure with jumps in the instantaneous volatility process. This model is named the "GQARCH-Itô-Jumps model". We adopt realized range-based threshold estimation for high-frequency financial data rather than realized return-based volatility estimators, which entail the loss of intra-day information on the price movement. Meanwhile, a quasi-likelihood function for the low-frequency GQARCH structure with jumps is developed for the parametric estimation. The asymptotic theory is mainly established for the proposed estimators in the case of finite-activity jumps. Moreover, simulation studies are implemented to check the finite-sample performance of the proposed methodology, and it is demonstrated how the proposed approaches can be practically used on financial data.
Keywords: Itô process, GQARCH, leverage effects, threshold, realized range-based volatility estimator, quasi-maximum likelihood estimate
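To illustrate the realized range idea referenced above, the sketch below computes a Parkinson-scaled realized range volatility from intraday high-low bars, with a crude threshold used to discard intervals suspected of containing jumps. The threshold rule and the synthetic price path are simplifying assumptions, not the exact estimator of the paper.

```python
import numpy as np

def realized_range(high, low, jump_threshold=None):
    """Parkinson-scaled realized range volatility over intraday intervals."""
    sq_range = np.log(high / low) ** 2 / (4.0 * np.log(2.0))
    if jump_threshold is not None:
        # crude jump filter: drop intervals with an unusually large range
        sq_range = sq_range[sq_range <= jump_threshold]
    return float(np.sqrt(sq_range.sum()))

# Synthetic 5-minute bars for one trading day (assumed data).
rng = np.random.default_rng(7)
mid = 100.0 * np.exp(np.cumsum(rng.normal(0.0, 5e-4, 78)))
high = mid * np.exp(np.abs(rng.normal(0.0, 4e-4, 78)))
low = mid * np.exp(-np.abs(rng.normal(0.0, 4e-4, 78)))
print("daily realized range volatility:", realized_range(high, low))
```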
Procedia PDF Downloads 161
16636 Accurate HLA Typing at High-Digit Resolution from NGS Data
Authors: Yazhi Huang, Jing Yang, Dingge Ying, Yan Zhang, Vorasuk Shotelersuk, Nattiya Hirankarn, Pak Chung Sham, Yu Lung Lau, Wanling Yang
Abstract:
Human leukocyte antigen (HLA) typing from next generation sequencing (NGS) data has potential applications in clinical laboratories and population genetic studies. Here we introduce a novel technique for HLA typing from NGS data based on read mapping, using a comprehensive reference panel containing all known HLA alleles, and de novo assembly of the gene-specific short reads. Accurate HLA typing at high-digit resolution was achieved when tested on publicly available NGS data, outperforming other newly developed tools such as HLAminer and PHLAT.
Keywords: human leukocyte antigens, next generation sequencing, whole exome sequencing, HLA typing
Procedia PDF Downloads 668
16635 Evaluation of Model-Based Code Generation for Embedded Systems–Mature Approach for Development in Evolution
Authors: Nikolay P. Brayanov, Anna V. Stoynova
Abstract:
The model-based development approach is gaining support and acceptance. Its higher abstraction level simplifies system description, allowing domain experts to do their best without particular knowledge of programming. The different levels of simulation support rapid prototyping and the verification and validation of the product even before it exists physically. Nowadays the model-based approach is beneficial for modelling complex embedded systems as well as for generating code for many different hardware platforms. Moreover, it can be applied in safety-relevant industries like automotive, which brings extra automation to the expensive device certification process and especially to software qualification. Using it, some companies report cost savings and quality improvements, but others claim no major changes or even cost increases. This publication examines the level of maturity and autonomy of the model-based approach for code generation. It is based on a real automotive seat heater (ASH) module, developed using The MathWorks, Inc. tools. The model, created with Simulink, Stateflow, and MATLAB, is used for automatic generation of C code with Embedded Coder. To prove the maturity of the process, Code Generation Advisor is used for automatic configuration. All additional configuration parameters are set to auto, where applicable, leaving the generation process to function autonomously. As a result of the investigation, the publication compares the quality of the automatically generated embedded code and a manually developed one. The measurements show that, in general, the code generated by the automatic approach is not worse than the manual one. A deeper analysis of the technical parameters enumerates the disadvantages, some of which are identified as topics for our future work.
Keywords: embedded code generation, embedded C code quality, embedded systems, model-based development
Procedia PDF Downloads 245
16634 Predictive Semi-Empirical NOx Model for Diesel Engine
Authors: Saurabh Sharma, Yong Sun, Bruce Vernham
Abstract:
Accurate prediction of NOx emission is a continuous challenge in the field of diesel engine-out emission modeling. Performing experiments for every condition and scenario costs a significant amount of money and man-hours; therefore, a model-based development strategy has been implemented in order to solve that issue. NOx formation is highly dependent on the burned gas temperature and the O2 concentration inside the cylinder. Current empirical models are developed by calibrating parameters representing the engine operating conditions against the measured NOx, which limits the prediction of purely empirical models to the region where they have been calibrated. An alternative solution is presented in this paper, which focuses on the utilization of in-cylinder combustion parameters to form a predictive semi-empirical NOx model. The result of this work is a fast and predictive NOx model built from physical parameters and empirical correlations. The model is developed from steady-state data collected over the entire operating region of the engine and from a predictive combustion model developed in Gamma Technologies (GT)-Power using the Direct Injection (DI)-Pulse combustion object. In this approach, the temperature in both the burned and unburned zones is considered during the combustion period, i.e., from Intake Valve Closing (IVC) to Exhaust Valve Opening (EVO). The oxygen concentration consumed in the burned zone and the trapped fuel mass are also considered while developing the reported model. Several statistical methods are used to construct the model, including individual machine learning methods and ensemble machine learning methods. A detailed validation of the model on multiple diesel engines is reported in this work. A substantial number of cases are tested for different engine configurations over a large span of speed and load points. Different sweeps of operating conditions, such as Exhaust Gas Recirculation (EGR), injection timing, and Variable Valve Timing (VVT), are also considered for the validation. The model shows very good predictability and robustness at both sea-level and altitude conditions with different ambient conditions. The advantages, such as high accuracy and robustness at different operating conditions, low computational time, and the small number of data points required for calibration, establish a platform where the model-based approach can be used for engine calibration and development. Moreover, this work aims at establishing a framework for future model development for other targets such as soot, Combustion Noise Level (CNL), NO2/NOx ratio, etc.
Keywords: diesel engine, machine learning, NOₓ emission, semi-empirical
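To indicate what a semi-empirical NOx correlation of this kind can look like, the sketch below integrates an Arrhenius-type term driven by burned-zone temperature and oxygen concentration over the IVC-EVO window. The functional form and all coefficients are illustrative assumptions to be calibrated against measured data; they are not the correlations of the paper.

```python
import numpy as np

def nox_index(t_burn, x_o2, dt, a=1.0e6, t_act=38000.0, b=0.5):
    """Relative NOx indicator integrated over the combustion window.

    t_burn : burned-zone temperature trace [K] (IVC to EVO)
    x_o2   : burned-zone oxygen mole fraction trace [-]
    dt     : time per sample [s]
    a, t_act, b : pre-exponential factor, activation temperature [K] and O2
                  exponent -- illustrative values requiring calibration
    """
    rate = a * np.exp(-t_act / t_burn) * np.maximum(x_o2, 0.0) ** b
    return float(np.sum(rate) * dt)

# Toy traces standing in for combustion-model (e.g. DI-Pulse) outputs.
s = np.linspace(0.0, 1.0, 200)
t_burn = 1200.0 + 1400.0 * np.exp(-((s - 0.3) / 0.15) ** 2)
x_o2 = 0.12 * (1.0 - 0.7 * np.exp(-((s - 0.3) / 0.2) ** 2))
print("relative NOx index:", nox_index(t_burn, x_o2, dt=1e-4))
```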
Procedia PDF Downloads 115
16633 Unsteady Rayleigh-Bénard Convection of Nanoliquids in Enclosures
Authors: P. G. Siddheshwar, B. N. Veena
Abstract:
Rayleigh-Bénard convection of a nanoliquid in shallow, square, and tall enclosures is studied using the Khanafer-Vafai-Lightstone single-phase model. The thermophysical properties of water, copper, copper oxide, alumina, silver, and titania at 300 K under stagnant conditions, collected from the literature, are used to calculate the thermophysical properties of the water-based nanoliquids. Phenomenological laws and mixture theory are used to calculate these properties. Free-free, rigid-rigid, and rigid-free boundary conditions are considered in the study. The intractable Lorenz model for each boundary combination is derived and then reduced to the tractable Ginzburg-Landau model. The amplitude thus obtained is used to quantify the heat transport in terms of the Nusselt number. The addition of nanoparticles is shown not to alter the influence of the nature of the boundaries on the onset of convection or on heat transport. Amongst the three enclosures considered, tall and shallow enclosures are found to transport maximum and minimum energy, respectively. The enhancement of heat transport due to nanoparticles in the three enclosures is found to be in the range of 3% - 11%. Comparison of the results in the case of rigid-rigid boundaries with those of an earlier work shows good agreement. The study has limitations in the sense that the thermophysical properties are calculated using quantities modelled for static conditions.
Keywords: enclosures, free-free, rigid-rigid, rigid-free boundaries, Ginzburg-Landau model, Lorenz model
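For context on the Lorenz model mentioned above, the sketch below integrates the classical three-mode Lorenz system for Rayleigh-Bénard convection and compares the late-time convection amplitude with the analytical fixed point. The parameter values are the classical illustrative ones, not the nanoliquid-specific coefficients derived in the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

PR, R, B = 10.0, 10.0, 8.0 / 3.0   # R above onset (R > 1) but below the chaotic regime

def lorenz(t, state):
    """Classical three-mode Lorenz truncation of Rayleigh-Benard convection."""
    x, y, z = state
    return [PR * (y - x), R * x - y - x * z, x * y - B * z]

sol = solve_ivp(lorenz, (0.0, 50.0), [1.0, 1.0, 1.0], max_step=0.01)
x = sol.y[0]

# Non-trivial fixed point of the Lorenz system: X* = Y* = +/- sqrt(B(R-1)), Z* = R-1.
print("fixed-point amplitude  :", np.sqrt(B * (R - 1.0)))
print("late-time |X| (steady) :", abs(x[-1]))
```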
Procedia PDF Downloads 257
16632 Evaluation of Turbulence Prediction over Washington, D.C.: Comparison of DCNet Observations and North American Mesoscale Model Outputs
Authors: Nebila Lichiheb, LaToya Myles, William Pendergrass, Bruce Hicks, Dawson Cagle
Abstract:
Atmospheric transport of hazardous materials in urban areas is increasingly under investigation due to the potential impact on human health and the environment. In response to health and safety concerns, several dispersion models have been developed to analyze and predict the dispersion of hazardous contaminants. The models of interest usually rely on meteorological information obtained from the meteorological models of NOAA’s National Weather Service (NWS). However, due to the complexity of the urban environment, NWS forecasts provide an inadequate basis for dispersion computation in urban areas. A dense meteorological network in Washington, DC, called DCNet, has been operated by NOAA since 2003 to support the development of urban monitoring methodologies and provide the driving meteorological observations for atmospheric transport and dispersion models. This study focuses on the comparison of wind observations from the DCNet station on the U.S. Department of Commerce Herbert C. Hoover Building against the North American Mesoscale (NAM) model outputs for the period 2017-2019. The goal is to develop a simple methodology for modifying NAM outputs so that the dispersion requirements of the city and its urban area can be satisfied. This methodology will allow us to quantify the prediction errors of the NAM model and propose adjustments of key variables controlling dispersion model calculation.
Keywords: meteorological data, Washington D.C., DCNet data, NAM model
Procedia PDF Downloads 235
16631 Development on the Modeling Driven Architecture
Authors: Sahar Shahsavaripour Ghazanfarpour
Abstract:
As our daily life depends on the quality of services built by systems and devices in our environment, the education and modelling of software quality are becoming important. With the daily growth of software systems and their heavy use, the development process and the evaluation of requirements at the early stages of development, especially at the architecture level, become more important. Model Driven Architecture transforms a platform-independent model into specific models whose purpose is to reduce the number of software changes needed to reach an executable model. The software engineering design process is semi-automated. The quality attributes needed in designing the architecture, and the quality attributes of its representation, are captured in the architecture models. The main problem is the relationship between the requirements and the elements, in some respects with implicit models and input sources in the process, because there is no detection capability. The MART profile is used to describe real-time properties and to perform platform modeling.
Keywords: MDA, DW, OMG, UML, AKB, software architecture, ontology, evaluation
Procedia PDF Downloads 497
16630 Simulation Model of Induction Heating in COMSOL Multiphysics
Authors: K. Djellabi, M. E. H. Latreche
Abstract:
The induction heating phenomenon depends on various factors, making the problem highly nonlinear. The mathematical analysis of this problem is in most cases very difficult and is reduced to simple cases. Further knowledge of induction heating systems is generated in production environments, but these trial-and-error procedures are long and expensive. Numerical models of the induction heating problem are another approach to reducing the above-mentioned drawbacks. This paper deals with a simulation model of the induction heating problem created in COMSOL Multiphysics. We present the results of numerical simulations of the induction heating process in pieces of cylindrical shape, in an inductor with four coils. The modeling of the induction heating process was carried out with COMSOL Multiphysics Version 4.2a, and for the study we present the temperature charts.
Keywords: induction heating, electromagnetic field, inductor, numerical simulation, finite element
Procedia PDF Downloads 316
16629 Comparison of Johnson-Cook and Barlat Material Model for 316L Stainless Steel
Authors: Yiğit Gürler, İbrahim Şimşek, Müge Savaştaer, Ayberk Karakuş, Alper Taşdemirci
Abstract:
316L steel is frequently used in industry due to its easy formability and availability in sheet metal forming processes. Numerical and experimental studies examining the mechanical behavior of 316L stainless steel during forming are frequently encountered in the literature. 316L stainless steel is the most common material used in the production of plate heat exchangers, which are produced by plastic deformation of the stainless steel. The motivation of this study is to determine the appropriate material model for the simulation of the sheet metal forming process. For this reason, two different material models were examined and LS-DYNA material cards were created using material test data: MAT133_BARLAT_YLD2000 and MAT093_SIMPLIFIED_JOHNSON_COOK. The results of the tensile test and the hydraulic bulge test, performed both numerically and experimentally, were compared. The obtained results were evaluated, and the most suitable material model was selected for the forming simulation. In future studies, this material model will be used in the numerical modeling of the sheet metal forming process.
Keywords: 316L, mechanical characterization, metal forming, LS-DYNA
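For reference, the (simplified) Johnson-Cook model compared in this abstract expresses the flow stress as a function of plastic strain and strain rate, with a thermal softening term in the full form. The sketch below evaluates that flow-stress law; the parameter values are illustrative placeholders for 316L, not the calibrated constants used in the study.

```python
import numpy as np

def johnson_cook_stress(eps_p, eps_rate, temp,
                        A=305e6, B=1161e6, n=0.61, C=0.01, m=0.517,
                        eps_rate_ref=1.0, t_room=293.0, t_melt=1673.0):
    """Johnson-Cook flow stress [Pa] (illustrative 316L-like parameters).

    sigma = (A + B*eps_p^n) * (1 + C*ln(eps_rate/eps_rate_ref)) * (1 - T*^m)
    """
    t_star = np.clip((temp - t_room) / (t_melt - t_room), 0.0, 1.0)
    strain_term = A + B * eps_p ** n
    rate_term = 1.0 + C * np.log(np.maximum(eps_rate / eps_rate_ref, 1e-12))
    thermal_term = 1.0 - t_star ** m
    return strain_term * rate_term * thermal_term

eps = np.linspace(0.0, 0.5, 6)
print(johnson_cook_stress(eps, eps_rate=1.0, temp=293.0) / 1e6)  # flow stress in MPa
```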
Procedia PDF Downloads 336
16628 Comparative Analysis of Dissimilarity Detection between Binary Images Based on Equivalency and Non-Equivalency of Image Inversion
Authors: Adnan A. Y. Mustafa
Abstract:
Image matching is a fundamental problem that arises frequently in many aspects of robot and computer vision. It can become a time-consuming process when matching images against a database consisting of hundreds of images, especially if the images are big. One approach to reducing the time complexity of the matching process is to reduce the search space in a pre-matching stage by quickly removing dissimilar images. The Probabilistic Matching Model for Binary Images (PMMBI) showed that dissimilarity detection between binary images can be accomplished quickly by random pixel mapping and is size invariant. The model is based on the gamma binary similarity distance, which recognizes an image and its inverse as containing the same scene and hence considers them to be the same image. However, in many applications an image and its inverse are not treated as the same but rather as dissimilar. In this paper, we present a comparative analysis of dissimilarity detection using PMMBI, based on the gamma binary similarity distance, and a modified PMMBI model based on a similarity distance that does distinguish between an image and its inverse as being dissimilar.
Keywords: binary image, dissimilarity detection, probabilistic matching model for binary images, image mapping
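The sketch below conveys the random pixel mapping idea behind PMMBI: sample a small set of pixel positions, measure the mismatch fraction d, and either treat an image and its inverse as equivalent (take the smaller of d and 1-d) or as distinct (use d directly). The equal-size assumption, sampling scheme, and threshold are simplifications for illustration, not the published probabilistic bounds of the model.

```python
import numpy as np

def mismatch_fraction(img_a, img_b, n_samples=500, rng=None):
    """Estimate dissimilarity between two equally sized binary images
    from a random sample of pixel positions."""
    if rng is None:
        rng = np.random.default_rng()
    rows = rng.integers(0, img_a.shape[0], n_samples)
    cols = rng.integers(0, img_a.shape[1], n_samples)
    return float(np.mean(img_a[rows, cols] != img_b[rows, cols]))

def is_dissimilar(img_a, img_b, threshold=0.2, inversion_equivalent=True):
    d = mismatch_fraction(img_a, img_b)
    if inversion_equivalent:     # gamma-style: an image and its inverse match
        d = min(d, 1.0 - d)
    return d > threshold

rng = np.random.default_rng(3)
a = rng.integers(0, 2, (256, 256)).astype(bool)
print(is_dissimilar(a, ~a, inversion_equivalent=True))   # False: inverse treated as same
print(is_dissimilar(a, ~a, inversion_equivalent=False))  # True : inverse treated as different
```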
Procedia PDF Downloads 156
16627 Probabilistic Graphical Model for the Web
Authors: M. Nekri, A. Khelladi
Abstract:
The World Wide Web is a network with a complex topology, the main properties of which are a power-law degree distribution, a low clustering coefficient, and a small average distance. Modeling the web as a graph allows information to be located quickly and consequently helps in the construction of search engines. Here, we present a model based on already existing probabilistic graphs with all the aforesaid characteristics. This work consists of studying the web in order to understand its structure, which enables us to model it more easily and to propose a possible algorithm for its exploration.
Keywords: clustering coefficient, preferential attachment, small world, web community
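As a minimal illustration of a preferential-attachment graph exhibiting the power-law and small-world properties listed in the keywords, the sketch below generates a Barabási-Albert graph with networkx and reports the relevant statistics. This generic generator is an assumption chosen for illustration; it is not the specific probabilistic model proposed by the authors.

```python
import networkx as nx

# Barabasi-Albert preferential attachment: each new node links to m existing
# nodes with probability proportional to their current degree.
g = nx.barabasi_albert_graph(n=2000, m=3, seed=42)

degrees = [d for _, d in g.degree()]
print("average clustering coefficient:", nx.average_clustering(g))
print("average shortest path length :", nx.average_shortest_path_length(g))
print("max / mean degree             :", max(degrees), "/", sum(degrees) / len(degrees))
```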
Procedia PDF Downloads 272
16626 Application of Data Mining Techniques for Tourism Knowledge Discovery
Authors: Teklu Urgessa, Wookjae Maeng, Joong Seek Lee
Abstract:
Five implementations of three data mining classification techniques were applied to extract important insights from tourism data. The aim was to find the best-performing algorithm among those compared for tourism knowledge discovery. The knowledge discovery from data process was used as the process model, and the 10-fold cross-validation method was used for testing. Various data preprocessing activities were performed to obtain the final dataset for model building. Classification models for the selected algorithms were built with different scenarios on the preprocessed dataset. The best-performing algorithm on the tourism dataset was Random Forest (76%) before applying information-gain-based attribute selection, and J48 (C4.5) (75%) after selection of the attributes most relevant to the class (target) attribute. In terms of model-building time, attribute selection improves the efficiency of all algorithms; the Artificial Neural Network (multilayer perceptron) showed the highest improvement (90%). The rules extracted from the decision tree model are presented, showing intricate, non-trivial knowledge/insight that would otherwise not be discovered by simple statistical analysis, despite the mediocre accuracy of the classification algorithms.
Keywords: classification algorithms, data mining, knowledge discovery, tourism
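A scikit-learn analogue of the comparison described above (the study itself used Weka-style implementations) might look like the sketch below: 10-fold cross-validation of a random forest, a C4.5-like decision tree, and a multilayer perceptron, each preceded by an information-gain-style attribute selection step. The dataset is a placeholder for the preprocessed tourism data.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier

# Placeholder dataset standing in for the preprocessed tourism data.
X, y = make_classification(n_samples=600, n_features=20, n_informative=6,
                           random_state=0)

models = {
    "Random Forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "Decision Tree (C4.5-like)": DecisionTreeClassifier(criterion="entropy", random_state=0),
    "Multilayer Perceptron": MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0),
}

for name, clf in models.items():
    # Mutual information approximates information-gain attribute selection.
    pipe = make_pipeline(SelectKBest(mutual_info_classif, k=8), clf)
    scores = cross_val_score(pipe, X, y, cv=10)
    print(f"{name:28s} 10-fold accuracy: {scores.mean():.3f}")
```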
Procedia PDF Downloads 296
16625 A Literature Review of the Trend towards Indoor Dynamic Thermal Comfort
Authors: James Katungyi
Abstract:
The steady-state thermal comfort model, which dominates thermal comfort practice and which places the ideal thermal conditions within a narrow range, does not deliver the expected comfort levels among occupants. Furthermore, the buildings where this model is applied consume a lot of energy in conditioning. This paper reviews significant literature on thermal comfort in dynamic indoor conditions, including the adaptive thermal comfort model and alliesthesia. A major finding of the paper is that the adaptive thermal comfort model is part of a trend from static to dynamic indoor environments in aspects such as lighting, views, sounds, and ventilation. Alliesthesia, or thermal delight, is consistent with this trend towards dynamic thermal conditions. It is within this trend that the twofold goal of increased thermal comfort and reduced energy consumption lies. At the heart of this trend is a rediscovery of the link between the natural environment and human well-being, a link that was partially severed by over-reliance on mechanically dominated artificial indoor environments. The paper concludes by advocating thermal conditioning solutions that integrate mechanical with natural thermal conditioning in a balanced manner, in order to meet occupant thermal needs without endangering the environment.
Keywords: adaptive thermal comfort, alliesthesia, energy, natural environment
Procedia PDF Downloads 220
16624 Design Approach for the Development of Format-Flexible Packaging Machines
Authors: G. Götz, P. Stich, J. Backhaus, G. Reinhart
Abstract:
The rising demand for format-flexible packaging machines is caused by current market changes. Increasing format-flexibility is a new goal for packaging machine manufacturers' product development processes, yet there are no methodical or design-orientated tools for a comprehensive consideration of this target. This paper defines the term format-flexibility in the context of packaging machines and reviews the state of the art for improving the changeover of production machines. The requirements for a new approach and the concept itself are introduced, and the method elements are explained. Finally, the use of the concept and the result of the development of a format-flexible packaging machine are shown.
Keywords: packaging machine, format-flexibility, changeover, design method
Procedia PDF Downloads 436
16623 Stress and Strain Analysis of Notched Bodies Subject to Non-Proportional Loadings
Authors: Ayhan Ince
Abstract:
In this paper, a simplified analytical method for calculating the elasto-plastic stresses and strains of notched bodies subject to non-proportional loading paths is discussed. The method is based on the Neuber notch correction, which relates the incremental elastic and elastic-plastic strain energy densities at the notch root, and on the material constitutive relationship. The validity of the method was demonstrated by comparing computed results of the proposed model against finite element numerical data for a notched shaft. The comparison showed that the model estimates notch-root elasto-plastic stresses and strains with good accuracy using linear-elastic stresses. The proposed model provides a more efficient and simpler analysis method, preferable to expensive experimental component tests and to more complex and time-consuming incremental non-linear FE analyses. The model is particularly suitable for fatigue life and fatigue damage estimates of notched components subjected to non-proportional loading paths.
Keywords: elasto-plastic, stress-strain, notch analysis, non-proportional loadings, cyclic plasticity, fatigue
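To show the kind of calculation the Neuber notch correction involves in its simplest (monotonic, proportional) form, the sketch below solves Neuber's rule together with a Ramberg-Osgood stress-strain curve for the notch-root stress and strain. The material constants and stress concentration factor are illustrative assumptions; the paper's incremental, energy-density formulation for non-proportional paths is more general than this single-step sketch.

```python
from scipy.optimize import brentq

# Illustrative cyclic material constants (Ramberg-Osgood) and notch parameters.
E = 200e3         # Young's modulus [MPa]
K_prime = 1200.0  # cyclic strength coefficient [MPa]
n_prime = 0.2     # cyclic strain hardening exponent
Kt = 3.0          # elastic stress concentration factor
S = 150.0         # nominal (net-section) stress [MPa]

def ramberg_osgood_strain(sigma):
    """Total strain (elastic + plastic) for a given stress."""
    return sigma / E + (sigma / K_prime) ** (1.0 / n_prime)

def neuber_residual(sigma):
    """Neuber's rule: sigma * eps = (Kt * S)^2 / E."""
    return sigma * ramberg_osgood_strain(sigma) - (Kt * S) ** 2 / E

# The notch-root stress lies between zero and the fictitious elastic value Kt*S.
sigma_notch = brentq(neuber_residual, 1e-6, Kt * S)
eps_notch = ramberg_osgood_strain(sigma_notch)
print(f"notch-root stress = {sigma_notch:.1f} MPa, strain = {eps_notch:.5f}")
```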
Procedia PDF Downloads 467
16622 Quantum Decision Making with Small Sample for Network Monitoring and Control
Authors: Tatsuya Otoshi, Masayuki Murata
Abstract:
With the development and diversification of applications on the Internet, applications that require high responsiveness, such as video streaming, are becoming mainstream. Application responsiveness is not only a matter of communication delay but also a matter of the time required to grasp changes in network conditions. The tradeoff between accuracy and measurement time is a challenge in network control. We make countless decisions all the time, and our decisions seem to resolve tradeoffs between time and accuracy: when making decisions, people are known to make appropriate choices based on relatively small samples. Although there have been various studies on models of human decision-making, a model that integrates various cognitive biases, called "quantum decision-making", has recently attracted much attention. However, the modeling of small samples has not been examined much so far. In this paper, we extend the model of quantum decision-making to model decision-making with a small sample. In the proposed model, the state is updated by value-based probability amplitude amplification. By analytically obtaining a lower bound on the number of samples required for decision-making, we show that decision-making with a small number of samples is feasible.
Keywords: quantum decision making, small sample, MPEG-DASH, Grover's algorithm
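To give a feel for the value-based amplitude amplification mentioned above, the sketch below runs a classical simulation of Grover-style amplitude amplification over a small set of decision options, boosting the probability of options whose value exceeds a threshold. The value vector, threshold, and iteration count are illustrative assumptions; this is a simplification of, not a reproduction of, the proposed decision model.

```python
import numpy as np

# Eight candidate actions with assumed values; high-value ones get amplified.
values = np.array([0.2, 0.9, 0.1, 0.4, 0.85, 0.3, 0.25, 0.15])
good = values > 0.8                      # value-based "oracle" marking

n = len(values)
amp = np.full(n, 1.0 / np.sqrt(n))       # uniform initial superposition

iterations = int(np.floor(np.pi / 4.0 * np.sqrt(n / good.sum())))
for _ in range(iterations):
    amp[good] *= -1.0                    # oracle: flip the sign of marked options
    amp = 2.0 * amp.mean() - amp         # diffusion: inversion about the mean

probs = amp ** 2
print("selection probabilities:", np.round(probs, 3))
print("probability mass on high-value options:", probs[good].sum())
```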
Procedia PDF Downloads 80
16621 Transfer Function Model-Based Predictive Control for Nuclear Core Power Control in PUSPATI TRIGA Reactor
Authors: Mohd Sabri Minhat, Nurul Adilla Mohd Subha
Abstract:
The 1 MWth PUSPATI TRIGA Reactor (RTP) at the Malaysian Nuclear Agency has been operating for more than 35 years. The existing core power control uses a conventional controller known as the Feedback Control Algorithm (FCA). It is technically challenging to keep the core power output stable and operating within acceptable error bands to meet the safety demands of the RTP. Currently, the power tracking performance of the system could be considered unsatisfactory, and there is significant room for improvement. Hence, a new core power control design is important to improve the current performance in tracking and regulating reactor power by controlling the movement of the control rods, suited to the demands of highly sensitive nuclear reactor power control. In this paper, the proposed Model Predictive Control (MPC) law was applied to control the core power. The core power control was based on mathematical models of the reactor core, MPC, and a control rod selection algorithm. The mathematical models of the reactor core comprised the point kinetics model, thermal-hydraulic models, and reactivity models. The proposed MPC was formulated as a transfer function model of the reactor core according to perturbation theory. The transfer function model-based predictive control (TFMPC) was developed to design the core power control with predictions based on a T-filter, towards the real-time implementation of MPC on hardware. This paper introduces the sensitivity functions for the TFMPC feedback loop to reduce the impact on the input actuation signal and demonstrates the behaviour of TFMPC in terms of disturbance and noise rejection. Comparisons of both tracking and regulation performance between the conventional controller and TFMPC were made using MATLAB and analysed. In conclusion, the proposed TFMPC shows satisfactory performance in tracking and regulating core power for controlling a nuclear reactor with high reliability and safety.
Keywords: core power control, model predictive control, PUSPATI TRIGA reactor, TFMPC
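As background on the kind of computation a transfer-function model-based predictive controller performs, the sketch below implements an unconstrained receding-horizon MPC in velocity form for a discrete SISO state-space realization of a simple first-order stand-in plant. The plant, horizons, and weights are illustrative assumptions; the RTP core model, T-filter, and rod-selection logic of the paper are not reproduced.

```python
import numpy as np

# Stand-in discrete SISO plant (NOT the RTP core model).
A = np.array([[0.9]])
B = np.array([[0.1]])
C = np.array([[1.0]])

# Augmented (velocity-form) model so the controller acts on input increments,
# which gives offset-free tracking of step setpoints.
nx = A.shape[0]
Aa = np.block([[A, np.zeros((nx, 1))], [C @ A, np.ones((1, 1))]])
Ba = np.vstack([B, C @ B])
Ca = np.hstack([np.zeros((1, nx)), np.ones((1, 1))])

Np, Nc, lam = 20, 5, 0.1   # prediction horizon, control horizon, move weight

# Prediction matrices: Y = F * x_aug(k) + Phi * dU
F = np.vstack([Ca @ np.linalg.matrix_power(Aa, i) for i in range(1, Np + 1)])
Phi = np.zeros((Np, Nc))
for i in range(Np):
    for j in range(min(i + 1, Nc)):
        Phi[i, j] = (Ca @ np.linalg.matrix_power(Aa, i - j) @ Ba)[0, 0]
K = np.linalg.solve(Phi.T @ Phi + lam * np.eye(Nc), Phi.T)   # unconstrained gain

# Closed-loop simulation: track a unit power setpoint.
x, x_prev, u, r = np.zeros((nx, 1)), np.zeros((nx, 1)), 0.0, 1.0
for k in range(80):
    y = float((C @ x)[0, 0])
    xa = np.vstack([x - x_prev, [[y]]])          # augmented state [dx; y]
    du = float((K @ (np.full((Np, 1), r) - F @ xa))[0, 0])
    u += du                                      # receding horizon: apply first move only
    x_prev = x
    x = A @ x + B * u
print("core power output after 80 steps:", float((C @ x)[0, 0]))  # approaches 1.0
```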
Procedia PDF Downloads 245