Search results for: teaching learning model
16231 A Model of Applied Psychology Research Defining Community Participation and Collective Identity as a Major Asset for Strategic Planning and Political Decision: The Project SIA (Social Inclusion through Accessibility)
Authors: Rui Serôdio, Alexandra Serra, José Albino Lima, Luísa Catita, Paula Lopes
Abstract:
We present the outline of the Project SIA (Social Inclusion through Accessibility), focusing on one of its core components: how our applied research model helps define community participation as a pillar of the strategic and political agenda of local authorities. Project SIA, supported by EU regional funding, was designed as part of a broader model developed by SIMLab (Social Inclusion Monitoring Laboratory), in which the University-Community relation is a core element. The project illustrates how the University of Porto developed a large-scale applied psychology research project in close partnership with 18 municipalities covering almost all regions of Portugal, and with a private architecture firm specialized in inclusive accessibility and "design for all". Three fundamental goals were defined: (1) creation of a model that would promote the effective civic participation of local citizens; (2) the "voice" of such participation should be both individual and collective; (3) the scientific and technical framework should serve as one of the bases for political decisions on local inclusive accessibility planning. The two main studies were run in a standardized model across all municipalities, and the samples of the three modalities of community participation were the following: individual participation based on 543 semi-structured interviews and 6373 questionnaires; collective participation based on group sessions with 302 local citizens. We present some of the broader findings of Project SIA and discuss how they relate to our applied research model.
Keywords: applied psychology, collective identity, community participation, inclusive accessibility
Procedia PDF Downloads 447
16230 Multi-Atlas Segmentation Based on Dynamic Energy Model: Application to Brain MR Images
Authors: Jie Huo, Jonathan Wu
Abstract:
Segmentation of anatomical structures in medical images is essential for scientific inquiry into the complex relationships between biological structure and clinical diagnosis, treatment, and assessment. As a method of incorporating prior knowledge and the anatomical structure similarity between a target image and atlases, multi-atlas segmentation has been successfully applied to a variety of medical images, including brain, cardiac, and abdominal images. The basic idea of multi-atlas segmentation is to transfer the labels in the atlases to the coordinate space of the target image by matching each target patch to atlas patches in its neighborhood. However, this technique is limited by the pairwise registration between the target image and the atlases. In this paper, a novel multi-atlas segmentation approach is proposed by introducing a dynamic energy model. First, the target is mapped to each atlas image by minimizing the dynamic energy function; then the segmentation of the target image is generated by weighted fusion based on the energy. The method is tested on the MICCAI 2012 Multi-Atlas Labeling Challenge dataset, which includes 20 target images and 15 atlas images. The paper also analyzes the influence of different parameters of the dynamic energy model on the segmentation accuracy and measures the Dice coefficient obtained with different feature terms in the energy model. The highest mean Dice coefficient obtained with the proposed method is 0.861, which is competitive with recently published methods.
Keywords: brain MRI segmentation, dynamic energy model, multi-atlas segmentation, energy minimization
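As an illustration of the evaluation metric cited above, the Dice coefficient between a predicted binary mask and a reference mask can be computed as follows (a minimal sketch; the flattened-list representation and the function name are assumptions, not the authors' code):

```python
def dice_coefficient(seg, ref):
    """Dice overlap between two flattened binary masks (lists of 0/1)."""
    inter = sum(1 for s, r in zip(seg, ref) if s and r)
    denom = sum(seg) + sum(ref)
    # two empty masks count as perfect overlap
    return 2.0 * inter / denom if denom else 1.0
```

For example, masks [1, 1, 0, 0] and [1, 0, 0, 0] share one voxel, giving 2·1/(2+1) ≈ 0.667; a mean value of 0.861 over a dataset therefore indicates substantial overlap with the reference labels.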
Procedia PDF Downloads 336
16229 IoT Based Process Model for Heart Monitoring Process
Authors: Dalyah Y. Al-Jamal, Maryam H. Eshtaiwi, Liyakathunisa Syed
Abstract:
Connecting health services with technology is in huge demand as people's health conditions worsen day by day. In fact, engaging new technologies such as the Internet of Things (IoT) in medical services can enhance patient care. Specifically, patients suffering from chronic diseases, such as cardiac patients, need special care and monitoring. Some efforts were previously made to automate and improve patient monitoring systems; however, these efforts have limitations and lack the real-time capability needed for chronic diseases. In this paper, an improved process model for a patient monitoring system specialized for cardiac patients is presented. A survey was distributed and interviews were conducted to gather the requirements needed to improve the cardiac patient monitoring system. The Business Process Model and Notation (BPMN) language was used to model the proposed process. The proposed system uses IoT technology to assist doctors in remotely monitoring and following up with their heart patients in real time. In order to validate the effectiveness of the proposed solution, simulation analysis was performed using the Bizagi Modeler tool. Analysis results show performance improvements in the heart monitoring process. In the future, the authors suggest enhancing the proposed system to cover all chronic diseases.
Keywords: IoT, process model, remote patient monitoring system, smart watch
Procedia PDF Downloads 332
16228 Constrained RGBD SLAM with a Prior Knowledge of the Environment
Authors: Kathia Melbouci, Sylvie Naudet Collette, Vincent Gay-Bellile, Omar Ait-Aider, Michel Dhome
Abstract:
In this paper, we handle the problem of real-time localization and mapping in an indoor environment assisted by a partial prior 3D model, using an RGBD sensor. The proposed solution relies on a feature-based RGBD SLAM algorithm to localize the camera and update the 3D map of the scene. To improve the accuracy and robustness of the localization, we propose to combine, in a local bundle adjustment process, geometric information provided by a prior coarse 3D model of the scene (e.g. generated from the 2D floor plan of the building) with RGBD data from a Kinect camera. The proposed approach is evaluated on a public benchmark dataset as well as on a real scene acquired by a Kinect sensor.
Keywords: SLAM, global localization, 3D sensor, bundle adjustment, 3D model
Procedia PDF Downloads 414
16227 The Influence of Demographic on Tea Consumption in China
Authors: Xiguan Jiangfan Yang
Abstract:
This study investigates tea consumption based on the Double-Hurdle model. The results from a CHNS survey of 12,745 respondents in China offer two preliminary insights. First, conclusions obtained from the full sample cannot be applied directly to the men or women subgroups. Second, men and women are affected by different demographic factors, both in the intention to drink tea and in the quantity of tea consumed. These two findings suggest that appropriate marketing strategies should be developed to target the different groups of tea consumers.
Keywords: Chinese, CHNS, Double-Hurdle model, tea consumption
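The Double-Hurdle idea, a participation decision followed by a quantity decision, can be sketched as follows. This is a simplified illustration, not the authors' specification: the probit participation hurdle, the linear quantity index, and all names are assumptions.

```python
import math

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def expected_consumption(x_part, alpha, x_qty, beta):
    # Hurdle 1: probability of being a tea drinker (probit on demographics)
    p_drink = norm_cdf(sum(a * x for a, x in zip(alpha, x_part)))
    # Hurdle 2: desired quantity given participation (linear index,
    # truncated at zero for illustration)
    quantity = max(sum(b * x for b, x in zip(beta, x_qty)), 0.0)
    return p_drink * quantity
```

Because the two hurdles use separate covariates and coefficients, men and women can differ in the participation equation, the quantity equation, or both, which is exactly the pattern the study reports.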
Procedia PDF Downloads 411
16226 An Intelligent Prediction Method for Annular Pressure Driven by Mechanism and Data
Authors: Zhaopeng Zhu, Xianzhi Song, Gensheng Li, Shuo Zhu, Shiming Duan, Xuezhe Yao
Abstract:
Accurate calculation of wellbore pressure is of great significance for preventing wellbore risk during drilling. The traditional mechanism model requires many iterative solution steps, which reduces the calculation efficiency and makes it difficult to meet the demand for dynamic control of wellbore pressure. In recent years, many scholars have introduced artificial intelligence algorithms into wellbore pressure calculation, significantly improving its efficiency and accuracy. However, due to the 'black box' property of intelligent algorithms, existing intelligent models of wellbore pressure perform poorly outside the scope of their training data and overreact to data noise, often producing abnormal results. In this study, the multi-phase flow mechanism is embedded into the objective function of a neural network model as a constraint condition, and an intelligent prediction model of wellbore pressure under this constraint is established based on more than 400,000 sets of pressure measurement while drilling (MPD) data. The multi-phase flow constraint makes the predictions of the neural network model more consistent with the distribution law of wellbore pressure, which overcomes the black-box attribute of the neural network to some extent. In particular, the accuracy on an independent test set is further improved, and abnormal calculated values largely disappear. This method is driven jointly by MPD data and the multi-phase flow mechanism, and it is a promising way to predict wellbore pressure accurately and efficiently in the future.
Keywords: multiphase flow mechanism, pressure while drilling data, wellbore pressure, mechanism constraints, combined drive
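Embedding a mechanism into a network's objective is commonly done by adding a penalty on the physics residual to the data-fit loss. A minimal sketch follows; the quadratic form, the weight `lam`, and all names are assumptions rather than the authors' formulation:

```python
def constrained_loss(pred, measured, physics_residual, lam):
    """Data-fit MSE plus a weighted penalty on the mechanism residuals."""
    data_term = sum((p - m) ** 2 for p, m in zip(pred, measured)) / len(pred)
    phys_term = sum(r * r for r in physics_residual) / len(physics_residual)
    return data_term + lam * phys_term
```

The second term pushes the network toward predictions that satisfy the multi-phase flow equations even where measured data are sparse or noisy, which is what curbs the purely data-driven model's abnormal outputs.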
Procedia PDF Downloads 174
16225 Dynamic Response and Damage Modeling of Glass Fiber Reinforced Epoxy Composite Pipes: Numerical Investigation
Authors: Ammar Maziz, Mostapha Tarfaoui, Said Rechak
Abstract:
The high mechanical performance of composite pipes can be adversely affected by their low resistance to impact loads. Dynamic loads are dangerous to the operation of pipes because the resulting damage is often not detected and can affect the structural integrity of composite pipes. In this work, an advanced 3-D finite element (FE) model based on intralaminar damage models was developed and used to predict damage under low-velocity impact. The performance of the numerical model is validated against the results of experimental tests. The results show that at low impact energy, damage occurs mainly by matrix cracking and delamination. The model's capability to simulate low-velocity impact events on full-scale composite structures was demonstrated.
Keywords: composite materials, low velocity impact, FEA, dynamic behavior, progressive damage modeling
Procedia PDF Downloads 172
16224 Software Quality Measurement System for Telecommunication Industry in Malaysia
Authors: Nor Fazlina Iryani Abdul Hamid, Mohamad Khatim Hasan
Abstract:
The evolution of software quality measurement began when McCall introduced his quality model in 1977. Since then, several software quality models and measurement methods have emerged, but none of them focused on the telecommunication industry. In this paper, a software quality measurement system for the telecommunication industry is implemented to accommodate the industry's rapid growth. The quality value of telecommunication-related software can be calculated with this system by entering the required parameters. The system calculates the quality value of the measured software based on predefined quality metrics, aggregated by reference to the quality model, and classifies the quality level of the software based on a Net Satisfaction Index (NSI). Such a measurement system is important to both developers and users for producing high-quality software products for the telecommunication industry.
Keywords: software quality, quality measurement, quality model, quality metric, net satisfaction index
Procedia PDF Downloads 592
16223 Electro-Fenton Degradation of Erythrosine B Using Carbon Felt as a Cathode: Doehlert Design as an Optimization Technique
Authors: Sourour Chaabane, Davide Clematis, Marco Panizza
Abstract:
This study investigates the oxidation of Erythrosine B (EB) food dye by a homogeneous electro-Fenton process using iron(II) sulfate heptahydrate as a catalyst, carbon felt as the cathode, and Ti/RuO₂ as the anode. The treated synthetic wastewater contains 100 mg L⁻¹ of EB at pH = 3. The effects of three independent variables were considered for process optimization: applied current intensity (0.1-0.5 A), iron concentration (1-10 mM), and stirring rate (100-1000 rpm). Their interactions were investigated using response surface methodology (RSM) based on a Doehlert design. EB removal efficiency and energy consumption after 30 minutes of electrolysis were taken as the model responses. Analysis of variance (ANOVA) revealed that the quadratic model fitted the experimental data adequately, with R² (0.9819), adj-R² (0.9276), and a low Fisher probability (< 0.0181) for the EB removal model, and R² (0.9968), adj-R² (0.9872), and a low Fisher probability (< 0.0014) for the energy consumption model, reflecting robust statistical significance. The energy consumption model depends significantly on current density, as expected. The foregoing RSM results led to the following optimal conditions for EB degradation: a current intensity of 0.2 A, an iron concentration of 9.397 mM, and a stirring rate of 500 rpm, which gave a maximum decolorization rate of 98.15% with a minimum energy consumption of 0.74 kWh m⁻³ at 30 min of electrolysis.
Keywords: electro-Fenton, Erythrosine B, dye, response surface methodology, carbon felt
Procedia PDF Downloads 72
16222 Formal Verification for Ethereum Smart Contract Using Coq
Authors: Xia Yang, Zheng Yang, Haiyong Sun, Yan Fang, Jingyu Liu, Jia Song
Abstract:
The smart contract in Ethereum is a unique program deployed on the Ethereum Virtual Machine (EVM) to help manage cryptocurrency. The security of smart contracts is critical to Ethereum's operation and highly sensitive. In this paper, we present a formal model for smart contracts, using the separated term-obligation (STO) strategy to formalize and verify them. We use the IBM smart sponsor contract (SSC) as an example to elaborate the details of the formalization process. We also propose a formal smart sponsor contract model (FSSCM) and verify the SSC's security properties with the interactive theorem prover Coq. Using our formal model and verification method, we found the 'Unchecked-Send' vulnerability in the SSC. Finally, we demonstrate how other smart contracts can be formalized and verified with this approach; our work indicates that formal verification can effectively establish the correctness and security of smart contracts.
Keywords: smart contract, formal verification, Ethereum, Coq
Procedia PDF Downloads 691
16221 An Approach to Analyze Testing of Nano On-Chip Networks
Authors: Farnaz Fotovvatikhah, Javad Akbari
Abstract:
The test time of a test architecture is an important factor that depends on the architecture's delay and the test patterns. Here, a new network-on-chip-based architecture for storing test results is presented. In addition, a simple analytical model is proposed to calculate link test time for a built-in self-tester (BIST) and an external tester (Ext) in multiprocessor systems. The results extracted from the model are verified using FPGA implementation and experimental measurements. Systems consisting of 16, 25, and 36 processors are implemented and simulated, and the test time is calculated. In addition, BIST and Ext are compared in terms of test time under different conditions, such as different numbers of test patterns and nodes. Using the model, the maximum testing frequency can be calculated, and the test structure can be optimized for high-speed testing.
Keywords: test, nano on-chip network, JTAG, modelling
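The abstract does not give the analytical model's equations, but the BIST-versus-external-tester comparison can be illustrated with a first-order cost model. Everything below is an assumption for illustration: the serial-shift cost for the external (JTAG-style) tester, the one-pattern-per-cycle cost for BIST, and all names.

```python
def ext_test_time(n_patterns, scan_chain_length, t_clk):
    # External tester: each pattern is shifted serially through the
    # scan chain (JTAG-style), so cost scales with chain length.
    return n_patterns * scan_chain_length * t_clk

def bist_test_time(n_patterns, t_clk):
    # BIST: an on-chip generator applies one pattern per clock cycle.
    return n_patterns * t_clk
```

Under this sketch, the external tester's time grows with both the pattern count and the chain length, which is why BIST wins as the number of nodes increases.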
Procedia PDF Downloads 488
16220 Synthesis of a Model Predictive Controller for Artificial Pancreas
Authors: Mohamed El Hachimi, Abdelhakim Ballouk, Ilyas Khelafa, Abdelaziz Mouhou
Abstract:
Introduction: Type 1 diabetes occurs when beta cells are destroyed by the body's own immune system. Treatment of type 1 diabetes mellitus could be greatly improved by applying a closed-loop control strategy to insulin delivery, also known as an Artificial Pancreas (AP). Method: In this paper, we present a new formulation of the cost function for Model Predictive Control (MPC) using a technique that accelerates the speed of control of the AP and tackles the nonlinearity of the control problem via asymmetric objective functions. Finding: The finding of this work is a new Model Predictive Control algorithm that achieves good performance, decreasing the time spent in hyperglycaemia and avoiding hypoglycaemia. Conclusion: These performances are validated in in silico trials.
Keywords: artificial pancreas, control algorithm, biomedical control, MPC, objective function, nonlinearity
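An asymmetric objective of the kind mentioned above penalizes deviations on one side of the glucose target more than the other, since hypoglycaemia is more dangerous than mild hyperglycaemia. The sketch below is a generic illustration, not the authors' cost function; the quadratic form, the weights, and all names are assumptions.

```python
def asymmetric_cost(glucose_trajectory, target, w_below, w_above):
    """Stage cost that penalizes readings below target (hypoglycaemia
    risk) more heavily than readings above it when w_below > w_above."""
    total = 0.0
    for g in glucose_trajectory:
        w = w_below if g < target else w_above
        total += w * (g - target) ** 2
    return total
```

With w_below >> w_above, the MPC optimizer prefers trajectories that overshoot the target slightly rather than risk dipping below it.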
Procedia PDF Downloads 307
16219 A Self-Coexistence Strategy for Spectrum Allocation Using Selfish and Unselfish Game Models in Cognitive Radio Networks
Authors: Noel Jeygar Robert, V. K. Vidya
Abstract:
Cognitive radio is a software-defined radio technology that allows cognitive users to operate on the vacant bands of spectrum allocated to licensed users. Cognitive radio plays a vital role in the efficient utilization of the wireless radio spectrum shared between cognitive users and licensed users without causing interference to licensed users. Spectrum allocation followed by spectrum sharing is done in a fashion where a cognitive user has to wait until spectrum holes are identified and allocated, when the licensed user moves out of his allocated spectrum. In this paper, we propose a self-coexistence strategy using bargaining and Cournot game models for achieving spectrum allocation in cognitive radio networks. The game-theoretic model analyses the behaviour of cognitive users in both cooperative and non-cooperative scenarios and provides an equilibrium level of spectrum allocation. Game-theoretic models such as the bargaining game and the Cournot game produce a balanced distribution of spectrum resources and energy consumption. Simulation results show that both game models achieve better performance than other popular techniques.
Keywords: cognitive radio, game theory, bargaining game, Cournot game
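For intuition about the Cournot component, the two-player Cournot equilibrium with linear inverse demand p = a - b(q1 + q2) and constant marginal costs has a textbook closed form. This is only loosely mapped onto spectrum demand; the parameter names and the linear-demand assumption are illustrative, not taken from the paper.

```python
def cournot_equilibrium(a, b, c1, c2):
    """Equilibrium quantities for two Cournot competitors with
    inverse demand p = a - b*(q1 + q2) and marginal costs c1, c2."""
    q1 = (a - 2.0 * c1 + c2) / (3.0 * b)
    q2 = (a - 2.0 * c2 + c1) / (3.0 * b)
    return q1, q2
```

With symmetric costs the users split the resource evenly, which mirrors the "balanced distribution" the abstract reports for the non-cooperative scenario.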
Procedia PDF Downloads 299
16218 Modeling and Experimental Verification of Crystal Growth Kinetics in Glass Forming Alloys
Authors: Peter K. Galenko, Stefanie Koch, Markus Rettenmayr, Robert Wonneberger, Evgeny V. Kharanzhevskiy, Maria Zamoryanskaya, Vladimir Ankudinov
Abstract:
We analyze the structure of undercooled melts, the crystal growth kinetics, and the amorphous/crystalline microstructure of rapidly solidifying glass-forming Pd-based and CuZr-based alloys. A dendrite growth model is developed using a combination of the kinetic phase-field model and a mesoscopic sharp-interface model. The model predicts features of crystallization kinetics in alloys from thermodynamically controlled growth (governed by the Gibbs free energy change on solidification) to the kinetically limited regime (governed by atomic attachment-detachment processes at the solid/liquid interface). Comparing the critical undercoolings observed in the crystallization kinetics with experimental data on melt viscosity, atomistic simulation data on liquid microstructure, and the theoretically predicted dendrite growth velocity allows us to conclude that the dendrite growth kinetics strongly depends on changes in the cluster structure of the melt. The data from these theoretical and experimental investigations are used to interpret the microstructure of samples processed in an electromagnetic levitator on board the International Space Station within the projects "MULTIPHAS" (European Space Agency and German Aerospace Center, 50WM1941) and "KINETIKA" (ROSKOSMOS).
Keywords: dendrite, kinetics, model, solidification
Procedia PDF Downloads 120
16217 Mathematical Modeling Pressure Losses of Trapezoidal Labyrinth Channel and Bi-Objective Optimization of the Design Parameters
Authors: Nina Philipova
Abstract:
The influence of the geometric parameters of a trapezoidal labyrinth channel on the pressure losses along the labyrinth length is investigated in this work. The impact of the dentate height is studied at fixed values of the dentate angle and the dentate spacing. The objective of the work presented in this paper is to derive a mathematical model of the pressure losses along the labyrinth length as a function of the dentate height. Numerical simulations of the water flow are performed using the commercial codes ANSYS GAMBIT and FLUENT. The dripper inlet pressure is set to 1 bar. As a result, the mathematical model of the pressure losses is determined as a second-order polynomial by means of the commercial code STATISTIKA. Bi-objective optimization is performed using the mean algebraic utility function. The optimum value of the dentate height is defined at fixed values of the dentate angle and the dentate spacing. The derived model of the pressure losses and the optimum value of the dentate height serve as a basis for a more successful emitter design.
Keywords: drip irrigation, labyrinth channel hydrodynamics, numerical simulations, Reynolds stress model
Procedia PDF Downloads 154
16216 Earnings vs Cash Flows: The Valuation Perspective
Authors: Megha Agarwal
Abstract:
This research paper compares earnings-based and cash-flow-based methods of enterprise valuation. The theoretically equivalent methods based on earnings, such as the Residual Earnings Model (REM), the Abnormal Earnings Growth Model (AEGM), the Residual Operating Income Method (ReOIM), the Abnormal Operating Income Growth Model (AOIGM), and multiplier extensions such as the price/earnings ratio and the price/book value ratio, or the cash-flow-based models, such as the Dividend Valuation Method (DVM) and the Free Cash Flow Method (FCFM), all provide different valuation estimates for the Indian giant corporate Reliance India Limited (RIL). An ex-post analysis of published accounting and financial data for four financial years, from 2008-09 to 2011-12, has been conducted. A comparison of these valuation estimates with the actual market capitalization of the company shows that the complex accounting-based model AOIGM provides the closest forecasts. The different estimates may arise from inconsistencies in discount rates, growth rates, and the other forecasted variables. Although inputs for earnings-based models may be available to investors and analysts through published statements, precise estimation of free cash flows may be better undertaken by internal management. Estimation of value from more stable parameters, such as residual operating income and RNOA, can be considered superior to valuations from the more volatile return on equity.
Keywords: earnings, cash flows, valuation, Residual Earnings Model (REM)
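The Residual Earnings Model mentioned above values equity as current book value plus discounted residual earnings (earnings in excess of the required return on opening book value). A minimal sketch under a full-retention assumption (no dividends); the names and the simplified clean-surplus bookkeeping are illustrative, not the paper's implementation:

```python
def residual_earnings_value(book0, earnings, r):
    """Value = B0 + sum_t RE_t / (1+r)^t, with RE_t = E_t - r*B_{t-1};
    book value rolls forward assuming full retention (no dividends)."""
    value, book = book0, book0
    for t, e in enumerate(earnings, start=1):
        residual = e - r * book
        value += residual / (1.0 + r) ** t
        book += e
    return value
```

If earnings exactly equal r times opening book value each period, every residual-earnings term is zero and the value collapses to B0, which is the model's anchoring property.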
Procedia PDF Downloads 376
16215 Exploring the Spatial Relationship between Built Environment and Ride-hailing Demand: Applying Street-Level Images
Authors: Jingjue Bao, Ye Li, Yujie Qi
Abstract:
The explosive growth of ride-hailing has reshaped residents' travel behavior and plays a crucial role in urban mobility within the built environment. Contributing to research on the spatial variation of ride-hailing demand and its relationship to the built environment and socioeconomic factors, this study utilizes multi-source data from Haikou, China, to construct a Multi-scale Geographically Weighted Regression (MGWR) model that accounts for spatial scale heterogeneity. The regression results showed that the MGWR model has superior interpretability and reliability compared with the Geographically Weighted Regression (GWR) model, with a 3.4% improvement in R² and a reduction in AIC from 4853 to 4787. Furthermore, to precisely characterize the surroundings of each sampling point, the DeepLabv3+ model is employed to segment street-level images. Features extracted from these images are incorporated as variables in the regression model, further enhancing its rationality and accuracy, with a 7.78% improvement in R² over the MGWR model that considered only region-level variables. By integrating multi-scale geospatial data and utilizing advanced computer vision techniques, this study provides a comprehensive understanding of the spatial dynamics between ride-hailing demand and the urban built environment. The insights gained from this research are expected to contribute significantly to urban transportation planning and policy making, as well as to ride-hailing platforms, facilitating the development of more efficient and effective mobility solutions in modern cities.
Keywords: travel behavior, ride-hailing, spatial relationship, built environment, street-level image
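The model-comparison statistics cited above (R² and AIC) are computed from a model's residuals. A hedged sketch using the common Gaussian-likelihood form of AIC (the exact variant the authors used is not stated in the abstract):

```python
import math

def r_squared(y, y_hat):
    """Coefficient of determination from observed and fitted values."""
    mean_y = sum(y) / len(y)
    ss_res = sum((a - b) ** 2 for a, b in zip(y, y_hat))
    ss_tot = sum((a - mean_y) ** 2 for a in y)
    return 1.0 - ss_res / ss_tot

def aic_gaussian(y, y_hat, k):
    """AIC for a least-squares fit with k effective parameters."""
    n = len(y)
    rss = sum((a - b) ** 2 for a, b in zip(y, y_hat))
    return n * math.log(rss / n) + 2 * k
```

A drop in AIC from 4853 to 4787, as reported, indicates a better fit after accounting for model complexity; lower AIC is preferred.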
Procedia PDF Downloads 81
16214 Physical Interaction Mappings: Utilizing Cognitive Load Theory in Order to Enhance Physical Product Interaction
Authors: Bryan Young, Andrew Wodehouse, Marion Sheridan
Abstract:
The availability of working memory has long been identified as a critical aspect of instructional design. Many conventional instructional procedures impose irrelevant or unrelated cognitive loads on the learner because they were created without consideration, or understanding, of cognitive load. Learning to physically operate a traditional product can be viewed as a learning process akin to any other. As such, many of today's products, such as cars, boats, and planes, whose traditional controls predate modern user-centered design techniques, may be imposing irrelevant or unrelated cognitive loads on their operators. The goal of the research was to investigate the fundamental relationships between physical inputs, resulting actions, and learnability. The results showed that individuals can quickly adapt to input/output reversals across dimensions; however, they struggle to cope with input/output mappings in which the dimensions are rotated, due to the resulting increase in cognitive load.
Keywords: cognitive load theory, instructional design, physical product interactions, usability design
Procedia PDF Downloads 537
16213 Generation of Knowledge with Self-Learning Methods for Ophthalmic Data
Authors: Klaus Peter Scherer, Daniel Knöll, Constantin Rieder
Abstract:
Problem and Purpose: Intelligent systems are available and helpful for supporting the human decision process, especially when complex surgical eye interventions are necessary and must be performed. Normally, such a decision support system consists of a knowledge-based module, which is responsible for the real assistance power, given by explanation and logical reasoning processes. The interview-based acquisition and generation of this complex knowledge is crucial, because there are different correlations between the complex parameters. So, in this project, (semi-)automated self-learning methods are researched and developed to enhance the quality of such a decision support system. Methods: For ophthalmic data sets of real patients in a hospital, advanced data mining procedures seem to be very helpful. In particular, subgroup analysis methods are developed, extended, and used to analyze the structured patient data and find the correlations and conditional dependencies between the parameters. After finding causal dependencies, a ranking must be performed for the generation of rule-based representations. For this, anonymized patient data are transformed into a special machine-readable format. The imported data are used as input for conditional probability algorithms to calculate the parameter distributions concerning a given goal parameter. Results: In the field of knowledge discovery, advanced methods and applications could be performed to produce operation- and patient-related correlations. New knowledge was generated by finding causal relations between the operational equipment, the medical instances, and patient-specific history through a dependency ranking process. After transformation into association rules, logically based representations were available for the clinical experts to evaluate the new knowledge. The structured data sets take account of about 80 parameters as characteristic features per patient. For different extended patient groups (100, 300, 500), single-target as well as multi-target values were set for the subgroup analysis, so the newly generated hypotheses could be interpreted regarding their dependency on, or independence of, the number of patients. Conclusions: The aim and advantage of such a semi-automatic self-learning process are the extension of the knowledge base by finding new parameter correlations. The discovered knowledge is transformed into association rules and serves as a rule-based representation of the knowledge in the knowledge base. Moreover, more than one goal parameter of interest can be considered by the semi-automated learning process. With ranking procedures, the strongest premises and also conjunctive associated conditions can be found to conclude the goal parameter of interest. In this way, the knowledge hidden in structured tables or lists can be extracted as a rule-based representation. This is a real assistance power for the communication with the clinical experts.
Keywords: expert system, knowledge-based support, ophthalmic decision support, self-learning methods
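Association rules of the kind described above are typically ranked by support and confidence. A hedged sketch (the list-of-sets data representation and all names are assumptions; real subgroup-discovery tools use richer quality measures):

```python
def support(transactions, itemset):
    """Fraction of records containing every item of the itemset."""
    hits = sum(1 for t in transactions if itemset <= t)
    return hits / len(transactions)

def confidence(transactions, antecedent, consequent):
    """Estimated P(consequent | antecedent) over the records."""
    return support(transactions, antecedent | consequent) / support(transactions, antecedent)
```

Ranking candidate rules by confidence (and filtering by minimum support) is one simple way to surface the "strongest premises" for a chosen goal parameter.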
Procedia PDF Downloads 253
16212 Magneto-Rheological Damper Based Semi-Active Robust H∞ Control of Civil Structures with Parametric Uncertainties
Authors: Vedat Senol, Gursoy Turan, Anders Helmersson, Vortechz Andersson
Abstract:
In developing a mathematical model of a real structure, the simulation results of the model may not match the real structural response. This is a general problem that arises during dynamic motion of the structure and may be modeled by means of parameter variations in the stiffness, damping, and mass matrices. These changes in parameters need to be estimated, and the mathematical model updated, to obtain higher control performance and robustness. In this study, a linear fractional transformation (LFT) is utilized for uncertainty modeling. Further, a general approach is presented to the design of an H∞ controller with a magneto-rheological damper (MRD) for vibration reduction in a building with mass, damping, and stiffness uncertainties.
Keywords: uncertainty modeling, structural control, MR damper, H∞, robust control
Procedia PDF Downloads 138
16211 The Improvement of Environmental Protection through Motor Vehicle Noise Abatement
Authors: Z. Jovanovic, Z. Masonicic, S. Dragutinovic, Z. Sakota
Abstract:
In this paper, a methodology for noise reduction of motor vehicles in use is presented. The methodology relies on a synergic model of noise generation as a function of time. An arbitrary number of motor vehicle noise sources act in concert, yielding the overall noise level of the motor vehicle. The number of noise sources participating in the overall noise level is subject to the constraint that the acoustic potential of each noise source under consideration can be calculated; this is the prerequisite for calculating the acoustic potential of the whole vehicle. The recast form of the pertinent set of equations describing the synergic model is laid down and solved by the Gauss method. A number of results emerged, and some of them, namely those from applying the model to an MDD FAP Priboj motor vehicle in use, are particularly elucidated.
Keywords: noise abatement, MV noise sources, noise source identification, muffler
Procedia PDF Downloads 445
16210 Image Classification with Localization Using Convolutional Neural Networks
Authors: Bhuyain Mobarok Hossain
Abstract:
Image classification and localization research is currently an important strategy in the field of computer vision. The evolution and advancement of deep learning and convolutional neural networks (CNNs) have greatly improved the capabilities of object detection and image-based classification. Target detection is important to research in the field of computer vision, especially in video surveillance systems. To solve this problem, we apply a convolutional neural network at multiple scales and multiple locations in the image in one sliding window. Most detection networks regress a bounding box around the area of interest; in contrast to this architecture, we treat the problem as a classification problem in which each pixel of the image is a separate section. Image classification is the task of predicting a single category for an image, or assigning it a label from a set of data points; it is part of the broader classification problem and can include any labels throughout the image. An image can be classified as a day or night shot, or, likewise, images of cars and motorbikes can be automatically placed in their own collections. Deep learning for image classification generally involves convolutional layers; a network built from them is referred to as a convolutional neural network (CNN).
Keywords: image classification, object detection, localization, particle filter
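The sliding-window idea described above can be made concrete: enumerate windows over the image, score each with a classifier, and keep the best-scoring one as the localization. A minimal sketch; the window enumeration, the score-function interface, and all names are assumptions, not the authors' implementation.

```python
def sliding_windows(width, height, win, stride):
    """Yield (x, y, win, win) boxes covering the image in one pass."""
    for y in range(0, height - win + 1, stride):
        for x in range(0, width - win + 1, stride):
            yield (x, y, win, win)

def best_window(width, height, win, stride, score_fn):
    """Return the window where the classifier score is highest."""
    return max(sliding_windows(width, height, win, stride), key=score_fn)
```

In a CNN setting, `score_fn` would run the network on the cropped window; running the convolutional layers once over the whole image and sharing computation across windows is the usual efficiency trick.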
Procedia PDF Downloads 305
16209 Development of Medical Intelligent Process Model Using Ontology Based Technique
Authors: Emmanuel Chibuogu Asogwa, Tochukwu Sunday Belonwu
Abstract:
An urgent demand for creative solutions has been created by the rapid expansion of medical knowledge, the complexity of patient care, and the requirement for more precise decision-making. The creation of a Medical Intelligent Process Model (MIPM) utilizing ontology-based techniques appears to be a promising way to overcome this obstacle and unleash the full potential of healthcare systems. The work is motivated by the lack of quick access to relevant medical information and of advanced tools for treatment planning and clinical decision-making, both of which ontology-based techniques can provide. The aim of this work is to develop a structured, knowledge-driven framework that leverages an ontology, a formal representation of domain knowledge, to enhance various aspects of healthcare. The Object-Oriented Analysis and Design Methodology (OOADM) was adopted in the design of the system, as we desired to build a usable and evolvable application. For the implementation of this work, the medical dataset used to test the model was obtained from Kaggle. The ontology-based technique was implemented with a confusion matrix, MySQL, Python, Hypertext Markup Language (HTML), Hypertext Preprocessor (PHP), Cascading Style Sheets (CSS), JavaScript, Dreamweaver, and Fireworks. According to confusion-matrix test results on the new system, both the accuracy and the overall effectiveness of the medical intelligent process improved significantly, by 20% compared to the previous system. The model is therefore recommended for healthcare professionals.
Keywords: ontology-based, model, database, OOADM, healthcare
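The evaluation step named above, accuracy computed from a confusion matrix, can be sketched as follows (the 2×2 counts are illustrative, not the paper's results):

```python
def accuracy_from_confusion(matrix):
    """Overall accuracy = trace / total for a square confusion matrix
    whose rows are true classes and columns are predicted classes."""
    correct = sum(matrix[i][i] for i in range(len(matrix)))
    total = sum(sum(row) for row in matrix)
    return correct / total

# Illustrative binary confusion matrix: [[TP, FN], [FP, TN]].
cm = [[40, 10],
      [5, 45]]
print(accuracy_from_confusion(cm))  # → 0.85
```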
Procedia PDF Downloads 78
16208 Using TRACE and SNAP Codes to Establish the Model of Maanshan PWR for SBO Accident
Authors: B. R. Shen, J. R. Wang, J. H. Yang, S. W. Chen, C. Shih, Y. Chiang, Y. F. Chang, Y. H. Huang
Abstract:
In this research, the TRACE code with the interface code SNAP was used to simulate and analyze an SBO (station blackout) accident in the Maanshan PWR (pressurized water reactor) nuclear power plant (NPP). There are four main steps in this research. First, the SBO accident data of Maanshan NPP were collected. Second, the TRACE/SNAP model of Maanshan NPP was established using these data. Third, this TRACE/SNAP model was used to perform the simulation and analysis of the SBO accident. Finally, the simulation and analysis of the SBO with mitigation equipment was performed. The analysis results of TRACE are consistent with the data of Maanshan NPP, and according to the TRACE predictions the mitigation equipment can maintain the safety of Maanshan during an SBO.
Keywords: pressurized water reactor (PWR), TRACE, station blackout (SBO), Maanshan
Procedia PDF Downloads 194
16207 Multi-Label Approach to Facilitate Test Automation Based on Historical Data
Authors: Warda Khan, Remo Lachmann, Adarsh S. Garakahally
Abstract:
The increasing complexity of software and its applicability in a wide range of industries, e.g., automotive, call for enhanced quality assurance techniques. Test automation is one option to tackle the prevailing challenges by supporting test engineers with fast, parallel, and repetitive test executions. A high degree of test automation allows for a shift from mundane (manual) testing tasks to a more analytical assessment of the software under test. However, a high initial investment of test resources is required to establish test automation, which is, in most cases, a limitation given the time constraints provided for quality assurance of complex software systems. Hence, computer-aided creation of automated test cases is crucial to increase the benefit of test automation. This paper proposes the application of machine learning for the generation of automated test cases. It is based on supervised learning to analyze test specifications and existing test implementations. The analysis facilitates the identification of patterns between test steps and their implementation with test automation components. For test case generation, this approach exploits historical data of test automation projects. The identified patterns are the foundation for predicting the implementation of unknown test case specifications. With this support, a test engineer only has to review and parameterize the test automation components instead of writing them manually, resulting in a significant time reduction for establishing test automation. Compared to other generation approaches, this ML-based solution can handle different writing styles, authors, application domains, and even languages. Furthermore, test automation tools require expert knowledge in the form of programming skills, whereas this approach only requires historical data to generate test cases. The proposed solution is evaluated using various multi-label evaluation criteria (EC) and two small-sized real-world systems.
The most prominent EC is ‘Subset Accuracy’. The promising results show an accuracy of at least 86% for test cases where a 1:1 relationship (multi-class) between test step specification and test automation component exists. For complex multi-label problems, i.e., where one test step can be implemented by several components, the prediction accuracy is still 60%, which is better than current state-of-the-art results. The prediction quality is expected to increase for larger systems with respective historical data. Consequently, this technique facilitates the time reduction for establishing test automation and is independent of the application domain and project. As a work in progress, the next steps are to investigate incremental and active learning as additions to increase the usability of this approach, e.g., in case labelled historical data is scarce.
Keywords: machine learning, multi-class, multi-label, supervised learning, test automation
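The ‘Subset Accuracy’ criterion counts a prediction as correct only if the predicted label set matches the true label set exactly. A minimal sketch (the label sets below are hypothetical automation-component names, not from the evaluated systems):

```python
def subset_accuracy(y_true, y_pred):
    """Multi-label subset accuracy: the fraction of samples whose
    predicted label set matches the true label set exactly."""
    exact = sum(set(t) == set(p) for t, p in zip(y_true, y_pred))
    return exact / len(y_true)

# Illustrative: each sample's labels are the automation components
# predicted for one test step specification.
y_true = [{"open_app"}, {"click", "wait"}, {"assert_text"}, {"click"}]
y_pred = [{"open_app"}, {"click"}, {"assert_text"}, {"click"}]
print(subset_accuracy(y_true, y_pred))  # → 0.75 (second sample misses "wait")
```

This is a strict criterion: a partially correct label set scores zero for that sample, which is why the multi-label figure (60%) sits below the multi-class one (86%).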
Procedia PDF Downloads 132
16206 Modern State of the Universal Modeling for Centrifugal Compressors
Authors: Y. Galerkin, K. Soldatova, A. Drozdov
Abstract:
The 6th version of the Universal Modeling method for centrifugal compressor stage calculation is described. The new mathematical model was identified, and as a result of the identification a uniform set of empirical coefficients was obtained. The efficiency definition error is 0.86% at the design point, and 1.22% over five flow rate points (excluding the point of maximum flow rate). Several variants of a stage with 3D impellers designed by the 6th-version program and by quasi-three-dimensional calculation programs were compared with their gas-dynamic performances computed by CFD (NUMECA FINE/Turbo). The performance comparison demonstrated the validity of the general design principles and leads to some design recommendations.
Keywords: compressor design, loss model, performance prediction, test data, model stages, flow rate coefficient, work coefficient
Procedia PDF Downloads 412
16205 Study of Nitrogen Species Fate and Transport in Subsurface: To Assess the Impact of Wastewater Irrigation
Authors: C. Mekala, Indumathi M. Nambi
Abstract:
Nitrogen pollution in groundwater arising from wastewater and fertilizer application through the vadose zone is a major problem, and it poses a prime risk to groundwater-based drinking water supplies. The fate and transport of nitrogenous compounds, namely ammonium, nitrate, and nitrite, in the soil subsurface were studied experimentally. The major processes, such as sorption, leaching, biotransformation involving microbial growth kinetics, and biological clogging due to biomass growth, were assessed and modeled with advection-dispersion-reaction equations for ammonium, nitrate, and acetate in a saturated, heterogeneous soil medium. The transport process was coupled with Freundlich sorption and Monod inhibition kinetics for immobile bacteria, and the permeability reduction due to biomass growth will be verified and validated with the numerical model. This proposed mathematical model will be very helpful in the development of a management model for sustainable and safe wastewater reuse strategies such as irrigation and groundwater recharge.
Keywords: nitrogen species transport, transformation, biological clogging, biokinetic parameters, contaminant transport model, saturated soil
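An advection-dispersion-reaction formulation of the kind named above typically takes the following generic 1-D form for a dissolved species concentration C with sorption and a Monod-type biotransformation sink (standard textbook notation, not the paper's own equations):

```latex
% 1-D advection-dispersion-reaction equation for species concentration C,
% with retardation factor R (from the sorption isotherm), dispersion
% coefficient D, pore velocity v, biomass concentration X, maximum
% specific rate \mu_{\max}, and half-saturation constant K_s:
R \frac{\partial C}{\partial t}
  = D \frac{\partial^{2} C}{\partial x^{2}}
  - v \frac{\partial C}{\partial x}
  - \mu_{\max}\, X\, \frac{C}{K_{s} + C}
```

One such equation would be written for each species (ammonium, nitrate, acetate), with the reaction terms coupling them through the microbial kinetics.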
Procedia PDF Downloads 400
16204 Comparative Study of Experimental and Theoretical Convective, Evaporative for Two Model Distiller
Authors: Khaoula Hidouri, Ali Benhmidene, Bechir Chouachi
Abstract:
The purification of brackish seawater is becoming a necessity rather than a choice in the face of demographic and industrial growth, especially in third-world countries. Two models are studied in this work: a simple solar still, and a simple solar still coupled with a heat pump. In this research, the water productivity of the Simple Solar Distiller (SSD) and the Simple Solar Distiller with Hybrid Heat Pump (SSDHP) was determined by the orientation, the use of a heat pump, and the use of a simple or double glass cover. The productivity can exceed 1.2 L/m²h for the SSDHP model and 0.5 L/m²h for the SSD model. The global efficiency determined for the SSD and SSDHP models is 30% and 50%, respectively, and the internal efficiency attained 35% for the SSD model and 60% for the SSDHP model. The convective heat coefficient attained 2.5 W/m²°C and 0.5 W/m²°C for the SSDHP and SSD models, respectively.
Keywords: productivity, efficiency, convective heat coefficient, SSD model, SSDHP model
Procedia PDF Downloads 213
16203 A Study on the Performance of 2-PC-D Classification Model
Authors: Nurul Aini Abdul Wahab, Nor Syamim Halidin, Sayidatina Aisah Masnan, Nur Izzati Romli
Abstract:
There are many applications of the principal component method for reducing a large set of variables in various fields, and Fisher’s discriminant function is a popular tool for classification. This research focuses on the performance of a principal component-Fisher’s discriminant function in classifying rice kernels into their defined classes. The data were collected on the smells, or odour, of the rice kernels using an odour-detection sensor, the Cyranose; 32 variables were captured by this electronic nose (e-nose). The objective of this research is to measure how well a combined model of principal components and a linear discriminant performs as a classification model. The principal component method was used to reduce all 32 variables to a smaller and more manageable set of components, and the reduced components were then used to develop the Fisher’s discriminant function. In this research, there are 4 defined classes of rice kernel: Aromatic, Brown, Ordinary, and Others. Based on the output of the principal component method, the 32 variables were reduced to only 2 components. Based on the classification table from the discriminant analysis, 40.76% of the total observations were correctly classified into their classes by the PC-discriminant function. This means that the classification model misclassified more than 50% of the observations. In conclusion, the Fisher’s discriminant function built on 2 components from PCA (2-PC-D) is not satisfactory for classifying the rice kernels into their defined classes.
Keywords: classification model, discriminant function, principal component analysis, variable reduction
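The dimension-reduction stage of the 2-PC-D pipeline can be sketched with a generic PCA projection via SVD (the data below are synthetic stand-ins for the 32 e-nose variables; the Fisher's discriminant would then be fit on the two score columns):

```python
import numpy as np

def pca_reduce(X, n_components=2):
    """Project the rows of X onto the first `n_components` principal
    components (PCA via SVD of the mean-centered data matrix)."""
    Xc = X - X.mean(axis=0)
    # Rows of Vt are the principal directions, ordered by variance.
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

# Synthetic stand-in for e-nose readings: 10 samples x 32 variables.
rng = np.random.default_rng(0)
X = rng.normal(size=(10, 32))
scores = pca_reduce(X, n_components=2)
print(scores.shape)  # → (10, 2)
```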
Procedia PDF Downloads 333
16202 Statistical Analysis of the Impact of Maritime Transport Gross Domestic Product (GDP) on Nigeria’s Economy
Authors: Kehinde Peter Oyeduntan, Kayode Oshinubi
Abstract:
Nigeria is referred to as the ‘Giant of Africa’ due to its large population, land mass, and economy. However, it still trails far behind many smaller economies on the continent in terms of maritime operations. Since the maritime industry is the spark plug for national growth, housing the most crucial infrastructure that generates wealth for a nation, it is worrisome that a nation with six seaports lags in maritime activities. In this research, we studied how the Gross Domestic Product (GDP) of maritime transport influences the Nigerian economy. To do this, we applied Simple Linear Regression (SLR), a Support Vector Machine (SVM), a Polynomial Regression Model (PRM), a Generalized Additive Model (GAM), and a Generalized Linear Mixed Model (GLMM) to model the relationship between the nation’s Total GDP (TGDP) and the Maritime Transport GDP (MGDP) using a time series of 20 years. The results showed that the MGDP is statistically significant to the Nigerian economy. Among the statistical tools applied, the PRM of order 4 describes the relationship better than the other methods. The recommendations presented in this study will guide policy makers and help improve the economy of Nigeria in terms of its GDP.
Keywords: maritime transport, economy, GDP, regression, port
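Of the five models compared, simple linear regression has a closed form worth sketching (the MGDP/TGDP pairs below are hypothetical, not the study's 20-year series):

```python
def simple_linear_regression(x, y):
    """Ordinary least squares fit of y = a + b*x; returns (a, b)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    # Slope = covariance(x, y) / variance(x); intercept from the means.
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b

# Hypothetical MGDP (x) vs TGDP (y) pairs for illustration only:
mgdp = [1.0, 2.0, 3.0, 4.0, 5.0]
tgdp = [10.1, 12.0, 13.9, 16.1, 18.0]
a, b = simple_linear_regression(mgdp, tgdp)
print(round(a, 2), round(b, 2))  # → 8.05 1.99
```

The higher-order PRM favoured by the study generalizes this by adding powers of x as extra regressors.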
Procedia PDF Downloads 154