Search results for: generalized additive model
15494 Effects of Fourth Alloying Additive on Microstructure and Mechanical Properties of Sn-Ag-Cu Alloy
Authors: Ugur Buyuk, Sevda Engin
Abstract:
Among the various alloy systems considered as lead-free solder candidates, Sn-Ag-Cu alloys have been recognized as the most promising because of their excellent reliability and compatibility with current components. Sn-Ag-Cu alloys have therefore attracted considerable attention and have been proposed by the Japanese, EU, and US consortiums to replace conventional Sn-Pb eutectic solder. However, many problems and unknown characteristics of the Sn-Ag-Cu alloy system remain, such as the best composition, the large undercooling during solidification, and the formation of large intermetallics. It is expected that adding solidification nuclei to Sn-Ag-Cu alloys will refine the solidification microstructure and suppress undercooling. In the present work, the effects of fourth alloying elements, i.e., Zn, Ni, Bi, In, and Co, on the microstructural and mechanical properties of Sn-3.5Ag-0.9Cu lead-free solder were investigated. Sn-3.5Ag-0.9Cu-0.5X (X = Zn, Ni, Bi, In, Co (wt.%)) alloys were prepared in a graphite crucible under a vacuum atmosphere. The samples were directionally solidified upward at a constant temperature gradient and constant growth rate using a Bridgman-type directional solidification furnace. The microstructure, microhardness, and ultimate tensile strength of the alloys were measured, the effects of the fourth elements on the microstructure and mechanical properties of the Sn-Ag-Cu eutectic alloy were examined, and the results were compared with previous experimental findings.
Keywords: lead-free solders, microhardness, microstructure, tensile strength
Procedia PDF Downloads 413
15493 An Experimental Study on Some Conventional and Hybrid Models of Fuzzy Clustering
Authors: Jeugert Kujtila, Kristi Hoxhalli, Ramazan Dalipi, Erjon Cota, Ardit Murati, Erind Bedalli
Abstract:
Clustering is a versatile instrument in the analysis of collections of data, providing insights into the underlying structures of the dataset and enhancing modeling capabilities. The fuzzy approach to the clustering problem increases flexibility by involving the concept of partial memberships (some value in the continuous interval [0, 1]) of the instances in the clusters. Several fuzzy clustering algorithms have been devised, like FCM, Gustafson-Kessel, Gath-Geva, kernel-based FCM, PCM, etc. Each of these algorithms has its own advantages and drawbacks, so none of them performs superiorly on all datasets. In this paper we experimentally compare the FCM, GK, and GG algorithms and a hybrid two-stage fuzzy clustering model combining the FCM and Gath-Geva algorithms. Firstly, we theoretically discuss the advantages and drawbacks of each of these algorithms and describe the hybrid clustering model, which exploits the advantages and diminishes the drawbacks of each algorithm. Secondly, we experimentally compare the accuracy of the hybrid model by applying it to several benchmark and synthetic datasets.
Keywords: fuzzy clustering, fuzzy c-means algorithm (FCM), Gustafson-Kessel algorithm, hybrid clustering model
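Since the comparison above centers on FCM, a minimal sketch of the fuzzy c-means update loop may help fix ideas. The one-dimensional data, fuzzifier m = 2, iteration count, and seed below are illustrative assumptions, not the paper's experimental setup.

```python
import random

def fcm(points, c=2, m=2.0, iters=100, seed=0):
    """Minimal fuzzy c-means on 1-D data: alternate center and membership updates."""
    rng = random.Random(seed)
    # random initial memberships, each row normalized to sum to 1
    U = []
    for _ in points:
        row = [rng.random() for _ in range(c)]
        s = sum(row)
        U.append([u / s for u in row])
    centers = [0.0] * c
    for _ in range(iters):
        # centers: membership-weighted means (weights raised to the fuzzifier m)
        for k in range(c):
            num = sum((U[i][k] ** m) * points[i] for i in range(len(points)))
            den = sum(U[i][k] ** m for i in range(len(points)))
            centers[k] = num / den
        # memberships: standard FCM inverse-distance-ratio update
        for i, x in enumerate(points):
            for k in range(c):
                d_ik = abs(x - centers[k]) + 1e-12
                U[i][k] = 1.0 / sum(
                    (d_ik / (abs(x - centers[j]) + 1e-12)) ** (2.0 / (m - 1.0))
                    for j in range(c)
                )
    return centers, U

pts = [0.0, 0.1, 0.2, 5.0, 5.1, 5.2]
centers, U = fcm(pts)
```

With two well-separated groups, the centers settle near 0.1 and 5.1 and each point's largest membership approaches 1, which is the behavior the hybrid model's first (FCM) stage relies on.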
Procedia PDF Downloads 514
15492 The Development of Directed-Project Based Learning as Language Learning Model to Improve Students' English Achievement
Authors: Tri Pratiwi, Sufyarma Marsidin, Hermawati Syarif, Yahya
Abstract:
The 21st-century skills being highly promoted today are Creativity and Innovation, Critical Thinking and Problem Solving, and Communication and Collaboration. Communication skill is one of the essential skills that students should master. To master communication skills, students must first master their language skills, which are one of the main supporting factors in improving a person's communication skills: by learning language skills, students are considered capable of communicating well and correctly, so that the message delivered to the listener is conveyed clearly and easily understood. However, it cannot be denied that suboptimal English learning outcomes are a problem frequently found in the implementation of the learning process. This research aimed to improve students' language skills by developing a learning model for the English subject for eighth graders of SMP N 1 Uram Jaya through the implementation of Directed-Project Based Learning (DPjBL). The study is designed as Research and Development (R&D) using the ADDIE development model. The researcher collected data through observation, questionnaires, interviews, tests, and documentation, which were then analyzed qualitatively and quantitatively. The results showed that DPjBL is effective to use, as seen from the difference in scores between the pretest and posttest of the control class and the experimental class. From the questionnaire results, the students and teachers generally agreed with the DPjBL learning model. This learning model can increase the students' English achievement.
Keywords: language skills, learning model, Directed-Project Based Learning (DPjBL), English achievement
Procedia PDF Downloads 165
15491 Gas Pressure Evaluation through Radial Velocity Measurement of Fluid Flow Modeled by Drift Flux Model
Authors: Aicha Rima Cheniti, Hatem Besbes, Joseph Haggege, Christophe Sintes
Abstract:
In this paper, we consider a drift flux mixture model of blood flow. The mixture consists of a gas phase, which is carbon dioxide, and a liquid phase, which is an aqueous carbon dioxide solution. This model was used to determine the distributions of the mixture velocity, the mixture pressure, and the carbon dioxide pressure. These theoretical data are used to derive a method for measuring mean gas pressure through the determination of the radial velocity distribution. This method can be applied in the experimental domain.
Keywords: mean carbon dioxide pressure, mean mixture pressure, mixture velocity, radial velocity
Procedia PDF Downloads 324
15490 Environmental Performance of Different Lab Scale Chromium Removal Processes
Authors: Chiao-Cheng Huang, Pei-Te Chiueh, Ya-Hsuan Liou
Abstract:
Chromium-contaminated wastewater from electroplating industrial activity has been a long-standing environmental issue, as it can degrade surface water quality and is harmful to soil ecosystems. The traditional method of treating chromium-contaminated wastewater has been chemical coagulation, which consumes large amounts of chemicals such as sulfuric acid, sodium hydroxide, and sodium bicarbonate in order to remove chromium. However, a series of new methods for treating chromium-containing wastewater have been developed. This study aimed to compare the environmental impact of four different lab scale chromium removal processes: 1) chemical coagulation (the most common and traditional method), in which sodium metabisulfite was used as reductant; 2) an electrochemical process using two steel sheets as electrodes; 3) reduction by iron-copper bimetallic powder; and 4) photocatalysis by TiO2. Each process was run in the lab and achieved 100% removal of chromium in solution. A Life Cycle Assessment (LCA) study was then conducted based on the experimental data obtained from the four case studies to identify the environmentally preferable alternative for treating chromium wastewater. The model used for calculating the environmental impact was TRACI, and the system scope includes the production and use phases of the chemicals and electricity consumed by the chromium removal processes, as well as the final disposal of the chromium-containing sludge. The functional unit chosen in this study was the removal of 1 mg of chromium. The solution volume of each case study was adjusted to 1 L in advance, and the chemicals and energy consumed were proportionally adjusted. The emissions and resources consumed were identified and characterized into 15 categories of midpoint impacts.
The impact assessment results show that the human ecotoxicity category accounts for 55% of the environmental impact in Case 1, which can be attributed to the sulfuric acid used for pH adjustment. In Case 2, production of the steel sheet electrodes is an energy-intensive process and contributed 20% of the environmental impact. In Case 3, sodium bicarbonate is used as an anti-corrosion additive, which results mainly in 1.02E-05 Comparative Toxicity Units (CTU) in the human toxicity category and 0.54E-05 CTU in acidification of air. In Case 4, electricity consumption for the power supply of the UV lamp gives 5.25E-05 CTU in the human toxicity category and 1.15E-05 kg Neq in eutrophication. In conclusion, Cases 3 and 4 have higher environmental impacts than Cases 1 and 2, which can be attributed mostly to higher energy and chemical consumption, leading to high impacts in the global warming and ecotoxicity categories.
Keywords: chromium, lab scale, life cycle assessment, wastewater
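The functional-unit normalization described above (scaling all inventory results to the removal of 1 mg of chromium) can be sketched as follows; the category names and raw scores below are hypothetical, not the study's results.

```python
def impacts_per_fu(impacts, cr_removed_mg):
    """Scale raw inventory scores to the functional unit (1 mg Cr removed)."""
    return {cat: val / cr_removed_mg for cat, val in impacts.items()}

# hypothetical raw midpoint scores for treating 1 L containing 50 mg Cr
raw = {"human_toxicity_CTU": 2.5e-3, "eutrophication_kgNeq": 6.0e-4}
per_mg = impacts_per_fu(raw, 50.0)
```

Dividing by the mass removed makes processes with different treated volumes directly comparable, which is the point of fixing a functional unit.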
Procedia PDF Downloads 265
15489 A Predictive Model of Supply and Demand in the State of Jalisco, Mexico
Authors: M. Gil, R. Montalvo
Abstract:
Business Intelligence (BI) has become a major source of competitive advantage for firms around the world. BI has been defined as the process of data visualization and reporting for understanding what happened and what is happening. Moreover, BI has been studied for its predictive capabilities in the context of trade and financial transactions. The current literature has identified that BI permits managers to identify market trends, understand customer relations, and predict demand for their products and services. This last capability of BI has been of special concern to academics, specifically due to its power to build predictive models adaptable to specific time horizons and geographical regions. However, the current literature on BI focuses on predicting specific markets and industries because the impact of such predictive models was relevant to specific industries or organizations. The existing literature has not developed a predictive BI model that takes into consideration the whole economy of a geographical area. This paper seeks to create a predictive BI model that shows the bigger picture of a geographical area. It uses a data set from the Secretary of Economic Development of the state of Jalisco, Mexico, which includes data from all the commercial transactions that occurred in the state in recent years. By analyzing this data set, it will be possible to generate a BI model that predicts supply and demand for specific industries around the state of Jalisco. This research makes at least three contributions. Firstly, a methodological contribution to the BI literature by generating the predictive supply and demand model. Secondly, a theoretical contribution to the current understanding of BI: the model presented in this paper incorporates the whole picture of the economic field instead of focusing on a specific industry. Lastly, a practical contribution that may be relevant to local governments that seek to improve their economic performance by implementing BI in their policy planning.
Keywords: business intelligence, predictive model, supply and demand, Mexico
Procedia PDF Downloads 123
15488 Market Integration in the ECCAS Sub-Region
Authors: Mouhamed Mbouandi Njikam
Abstract:
This work assesses the trade potential of countries in the Economic Community of Central African States (ECCAS). The gravity model of trade is used to evaluate the trade flows of member countries and to compute the trade potential index of ECCAS during 1995-2010. The focus is on the removal of tariffs and non-tariff barriers in the sub-region. Estimates from the gravity model are used for the calculation of the sub-region's commercial potential. Its three main findings are: (i) the background research shows a low level of integration in the sub-region and open economies; (ii) a low level of industrialization and diversification are the main factors reducing trade potential in the sub-region; (iii) trade creation predominates over trade diversion between member countries.
Keywords: gravity model, ECCAS, trade flows, trade potential, regional cooperation
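A toy version of the gravity-model workflow may clarify how a trade potential index is obtained: calibrate the model on observed flows, then compare predicted with observed trade. The one-parameter frictionless form and the bilateral data below are illustrative assumptions; the paper's specification is richer (tariffs, non-tariff barriers, country effects).

```python
import math

def gravity_predict(gdp_i, gdp_j, dist, g):
    """Naive gravity prediction: T = g * Yi * Yj / D."""
    return g * gdp_i * gdp_j / dist

# hypothetical bilateral records: (GDP_i, GDP_j, distance, observed trade)
obs = [
    (50.0, 40.0, 500.0, 3.8),
    (30.0, 60.0, 800.0, 2.0),
    (80.0, 20.0, 400.0, 4.5),
]
# calibrate the constant g by averaging observed/structural ratios in logs
logs = [math.log(t * d / (yi * yj)) for yi, yj, d, t in obs]
g = math.exp(sum(logs) / len(logs))
# trade potential index: predicted flow over observed flow for each pair
tpi = [gravity_predict(yi, yj, d, g) / t for yi, yj, d, t in obs]
```

A pair with tpi above 1 trades less than the model predicts, i.e. it has unexploited trade potential; by construction of the log-average calibration, the geometric mean of the index over the sample is 1.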
Procedia PDF Downloads 426
15487 Development of Time Series Forecasting Model for Dengue Cases in Nakhon Si Thammarat, Southern Thailand
Authors: Manit Pollar
Abstract:
Identifying dengue epidemic periods early would be helpful in taking the necessary actions to prevent dengue outbreaks. Providing an accurate prediction of dengue epidemic seasons will allow local authorities sufficient time to take the necessary decisions and actions to safeguard the situation. This study aimed to develop a forecasting model for the number of dengue incidences in Nakhon Si Thammarat Province, Southern Thailand, using time series analysis. We developed Seasonal Autoregressive Integrated Moving Average (SARIMA) models on the monthly data collected between 2003-2011 and validated the models using data collected between January-September 2012. The results revealed that the SARIMA(1,1,0)(1,2,1)12 model closely described the trends and seasons of dengue incidence and confirmed the presence of dengue fever cases in Nakhon Si Thammarat for the years 2003-2011. The study showed that the one-step approach to predicting dengue incidences provided significantly more accurate predictions than the twelve-step approach. The model, even if based purely on statistical data analysis, can provide a useful basis for the allocation of resources for disease prevention.
Keywords: SARIMA, time series model, dengue cases, Thailand
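The one-step-ahead logic behind such forecasts can be illustrated with a much simpler stand-in for SARIMA: seasonally difference the series at lag 12, fit an AR(1) to the differenced values, and forecast a single step. This is a sketch, not the paper's SARIMA(1,1,0)(1,2,1)12 model, and the synthetic monthly series is an assumption.

```python
import math

def seasonal_diff(y, s=12):
    """Remove a period-s seasonal pattern: d_t = y_t - y_{t-s}."""
    return [y[t] - y[t - s] for t in range(s, len(y))]

def ar1_fit(d):
    """Least-squares fit of d_t = a + b * d_{t-1}."""
    x, z = d[:-1], d[1:]
    n = len(x)
    mx, mz = sum(x) / n, sum(z) / n
    cov = sum((xi - mx) * (zi - mz) for xi, zi in zip(x, z))
    var = sum((xi - mx) ** 2 for xi in x)
    b = cov / var if var > 1e-12 else 0.0
    return mz - b * mx, b

# synthetic monthly series with a 12-month season and a linear trend
y = [10 + 5 * math.sin(2 * math.pi * t / 12) + 0.1 * t for t in range(120)]
train, actual = y[:-1], y[-1]
d = seasonal_diff(train)
a, b = ar1_fit(d)
forecast = train[-12] + a + b * d[-1]  # one-step-ahead forecast
```

Because the forecast only reaches one month ahead, each prediction is anchored to recently observed values; iterating twelve steps ahead compounds model error, which is consistent with the paper's finding that the one-step approach is more accurate.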
Procedia PDF Downloads 358
15486 Structural Analysis and Detail Design of APV Module Structure Using Topology Optimization Design
Authors: Hyun Kyu Cho, Jun Soo Kim, Young Hoon Lee, Sang Hoon Kang, Young Chul Park
Abstract:
In this study, the structure of an APV (Air Pressure Vessel) module for an offshore drilling system was designed using topology optimization, and a structural safety evaluation was performed according to DNV rules. A 3D model was created based on the design and non-design areas separated by topology optimization under the environmental loads. Seventeen load cases covering wind loads and dynamic loads were defined, and a structural analysis evaluation was performed for each. As a result, the maximum stress was 181.25 MPa.
Keywords: APV, topology optimum design, DNV, structural analysis, stress
Procedia PDF Downloads 426
15485 Developing Integrated Model for Building Design and Evacuation Planning
Authors: Hao-Hsi Tseng, Hsin-Yun Lee
Abstract:
In the process of building design, designers have to complete the spatial design and consider the evacuation performance at the same time. It is usually difficult to combine the two planning processes, which results in a gap between spatial design and evacuation performance, so designers cannot reach an integrated optimal design solution. In addition, the evacuation routing models proposed by previous researchers differ from the practical evacuation decisions made in the field. On the other hand, more and more building design projects are executed with Building Information Modeling (BIM), in which the design content is formed in an object-oriented framework. Thus, the integration of BIM and evacuation simulation can make a significant contribution for designers. Therefore, this research will establish a model that integrates spatial design and evacuation planning. The proposed model will support spatial design modifications and optimize evacuation planning, allowing designers to complete the integrated design solution in BIM. Besides, this research improves the evacuation routing method to make the simulation results more practical. The proposed model will be applied to a building design project for evaluation and validation, where it will provide near-optimal design suggestions. By applying the proposed model, the integration and efficiency of the design process are improved, the evacuation plan is more useful, and the quality of the building spatial design will be better.
Keywords: building information modeling, evacuation, design, floor plan
Procedia PDF Downloads 456
15484 An Optimization Model for Waste Management in Demolition Works
Authors: Eva Queheille, Franck Taillandier, Nadia Saiyouri
Abstract:
Waste management has become a major issue in demolition works because of its environmental impact (energy consumption, resource consumption, pollution…). However, improving waste management requires taking the overall demolition process into account and considering the main demolition objectives (e.g., cost, delay). Establishing a strategy with these conflicting objectives (economic and environmental) remains complex. In order to provide decision support for demolition companies, a multi-objective optimization model was developed. In this model, a demolition strategy is computed from a set of 80 decision variables (worker team composition, machines, treatment for each type of waste, choice of treatment platform…), which impact the demolition objectives. The model was tested on a real case study (demolition of several buildings in France). To process the optimization, different optimization algorithms (NSGA2, MOPSO, DBEA…) were tested. The results allow the engineer in charge of this case to build a sustainable demolition strategy without affecting cost or delay.
Keywords: deconstruction, life cycle assessment, multi-objective optimization, waste management
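At the core of the multi-objective algorithms mentioned (NSGA2, MOPSO, DBEA) is Pareto dominance over the conflicting objectives. A minimal sketch follows, with hypothetical (cost, delay, environmental impact) tuples standing in for candidate demolition strategies; all objectives are minimized.

```python
def dominates(a, b):
    """True if strategy a is at least as good as b on every objective
    (lower cost, delay, impact) and strictly better on at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    """Keep only the non-dominated strategies."""
    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions if o is not s)]

# hypothetical strategies: (cost in k-euros, delay in days, impact score)
strategies = [(100, 30, 5.0), (120, 25, 4.0), (110, 35, 6.0), (90, 40, 5.5)]
front = pareto_front(strategies)
```

The dominated strategy (110, 35, 6.0) is filtered out because (100, 30, 5.0) beats it on all three objectives; the remaining strategies are the trade-off set from which the engineer picks.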
Procedia PDF Downloads 152
15483 Application of Public Access Two-Dimensional Hydrodynamic and Distributed Hydrological Models for Flood Forecasting in Ungauged Basins
Authors: Ahmad Shayeq Azizi, Yuji Toda
Abstract:
In Afghanistan, floods are the most frequent and recurrent events among natural disasters. On the other hand, the lack of monitoring data is a severe problem, which increases the difficulty of designing appropriate flood countermeasures and flood forecasting. This study simulates flood inundation in the Harirud River Basin by applying a distributed hydrological model, the Integrated Flood Analysis System (IFAS), and a 2D hydrodynamic model, the International River Interface Cooperative (iRIC), based on satellite rainfall combined with historical peak discharge and globally accessible data. The results of the simulation can predict the inundation area, depth, and velocity, and hardware countermeasures such as the impact of levee installation can be discussed using the present method. The methodology proposed in this study is suitable for areas where hydrological and geographical data, including river survey data, are poorly observed.
Keywords: distributed hydrological model, flood inundation, hydrodynamic model, ungauged basins
Procedia PDF Downloads 166
15482 Numerical Modeling of Flow in USBR II Stilling Basin with End Adverse Slope
Authors: Hamidreza Babaali, Alireza Mojtahedi, Nasim Soori, Saba Soori
Abstract:
The hydraulic jump is one of the effective ways of dissipating energy in stilling basins, where energy is highly dissipated by the jump. An adverse slope at the end of the stilling basin increases energy dissipation and the stability of the hydraulic jump. In this study, an adverse slope was added to the end of the United States Bureau of Reclamation (USBR) II stilling basin in a 1:40-scale hydraulic model of Nazloochay dam, and the flow into the stilling basin was simulated using Flow-3D software. The numerical model is verified by experimental data of water depth in the stilling basin. Then, the water level profile, Froude number, pressure, air entrainment, and turbulent dissipation were investigated for a discharge of 300 m3/s using the k-ε and Re-Normalization Group (RNG) turbulence models. The results showed good agreement between the numerical and experimental models, so the numerical model can be used to optimize stilling basins.
Keywords: experimental and numerical modelling, end adverse slope, flow parameters, USBR II stilling basin
Procedia PDF Downloads 179
15481 The Effectiveness of National Fiscal Rules in the Asia-Pacific Countries
Authors: Chiung-Ju Huang, Yuan-Hong Ho
Abstract:
This study utilizes the International Monetary Fund (IMF) Fiscal Rules Dataset, focusing on four specific fiscal rules (expenditure rule, revenue rule, budget balance rule, and debt rule) and five main characteristics of each rule (monitoring, enforcement, coverage, legal basis, and escape clause) to construct a fiscal rule index for nine countries in the Asia-Pacific region from 1996 to 2015. After constructing the fiscal rule index for each country, we apply the Panel Generalized Method of Moments (Panel GMM) to the constructed index to examine the effectiveness of fiscal rules in reducing procyclicality. Empirical results show that national fiscal rules have a significantly negative impact on the procyclicality of government expenditure. Additionally, stricter fiscal rules combined with high government effectiveness are effective in reducing the procyclicality of government expenditure. The results indicate that for the nine Asia-Pacific countries, fiscal rules together with government effectiveness are effective in reducing the procyclicality of fiscal policy.
Keywords: counter-cyclical policy, fiscal rules, government efficiency, procyclical policy
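A schematic of how characteristic scores could be aggregated into a fiscal rule index is shown below; the equal weighting and the 0/1 scores are assumptions for illustration and do not reproduce the study's actual index construction.

```python
def fiscal_rule_index(rules):
    """Average the five characteristic scores within each rule,
    then sum across the rules a country has in place (schematic)."""
    return sum(sum(chars.values()) / len(chars) for chars in rules)

# hypothetical country with two rules scored 0/1 on five characteristics
country = [
    {"monitoring": 1, "enforcement": 0, "coverage": 1,
     "legal_basis": 1, "escape_clause": 0},   # budget balance rule
    {"monitoring": 1, "enforcement": 1, "coverage": 1,
     "legal_basis": 0, "escape_clause": 1},   # debt rule
]
idx = fiscal_rule_index(country)
```

A country with more rules, each with stronger monitoring and enforcement, scores higher, which is the "stricter rules" dimension the Panel GMM regressions then relate to procyclicality.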
Procedia PDF Downloads 280
15480 A Novel Machining Method and Tool-Path Generation for Bent Mandrel
Authors: Hong Lu, Yongquan Zhang, Wei Fan, Xiangang Su
Abstract:
Bent mandrels have been widely used as precision moulds in the automobile, shipping, and aviation industries. To improve the versatility and efficiency of turning bent mandrels with a fixed rotational center, an instantaneous machining model based on cutting parameters and machine dimensions is proposed in this paper. A spiral-like tool-path generation approach for the non-axisymmetric turning of bent mandrels is also developed to deal with the part-to-part repeatability error in the existing turning model. The actual cutter-location points are calculated from cutter-contact points, which are obtained from a spiral sweep process using the equal-arc-length segment principle in a polar coordinate system. A tool offset to avoid interference between the tool and the workpiece is also considered in the machining model. Depending on the spindle rotational angle, synchronized control of the X-axis, Z-axis, and C-axis is adopted to generate the tool path of the turning process. A simulation method is developed to generate the NC program according to the presented model, including the calculation of cutter-location points and the generation of the cutting tool path. Taking a bent mandrel as an example, the maximum offset of the center axis is 4 mm in 3D space. Experimental results verify that the machining model and turning method are appropriate for the characteristics of bent mandrels.
Keywords: bent mandrel, instantaneous machining model, simulation method, tool-path generation
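The equal-arc-length segment principle in polar coordinates can be sketched numerically: densely sample the curve, accumulate chord lengths, and pick points at equal cumulative spacing. The Archimedean spiral and point counts below are illustrative assumptions, not the paper's tool-path data.

```python
import math

def equal_arc_points(r_func, theta_max, n_pts, n_samp=10000):
    """Sample a polar curve r(theta) at (approximately) equal arc-length spacing."""
    thetas = [theta_max * i / n_samp for i in range(n_samp + 1)]
    pts = [(r_func(t) * math.cos(t), r_func(t) * math.sin(t)) for t in thetas]
    # cumulative chord length along the densely sampled curve
    cum = [0.0]
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        cum.append(cum[-1] + math.hypot(x1 - x0, y1 - y0))
    total = cum[-1]
    # walk the cumulative-length table, picking one point per equal target
    out, j = [], 0
    for k in range(n_pts):
        target = total * k / (n_pts - 1)
        while cum[j] < target:
            j += 1
        out.append(pts[j])
    return out

# Archimedean spiral r = 1 + 0.1*theta over two turns
path = equal_arc_points(lambda t: 1 + 0.1 * t, 4 * math.pi, 50)
```

Equal arc spacing keeps the chip load roughly uniform along the spiral sweep, which is one practical reason for segmenting by arc length rather than by angle.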
Procedia PDF Downloads 336
15479 Assessing Effects of an Intervention on Bottle-Weaning and Reducing Daily Milk Intake from Bottles in Toddlers Using Two-Part Random Effects Models
Authors: Yungtai Lo
Abstract:
Two-part random effects models have been used to fit semi-continuous longitudinal data where the response variable has a point mass at 0 and a continuous right-skewed distribution for positive values. We review methods proposed in the literature for analyzing data with excess zeros. A two-part logit-log-normal random effects model, a two-part logit-truncated normal random effects model, a two-part logit-gamma random effects model, and a two-part logit-skew normal random effects model were used to examine effects of a bottle-weaning intervention on reducing bottle use and daily milk intake from bottles in toddlers aged 11 to 13 months in a randomized controlled trial. We show in all four two-part models that the intervention promoted bottle-weaning and reduced daily milk intake from bottles in toddlers drinking from a bottle. We also show that there are no differences in model fit using either the logit link function or the probit link function for modeling the probability of bottle-weaning in all four models. Furthermore, prediction accuracy of the logit or probit link function is not sensitive to the distribution assumption on daily milk intake from bottles in toddlers not off bottles.
Keywords: two-part model, semi-continuous variable, truncated normal, gamma regression, skew normal, Pearson residual, receiver operating characteristic curve
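The structure of a two-part model for such semi-continuous data can be sketched with a simple moment-style fit: a probability of zero intake (the weaned part) plus a log-normal for positive intakes. The simulated intakes and parameters below are assumptions for illustration, not the trial's data or the full random-effects estimation.

```python
import math, random

def fit_two_part(y):
    """Moment-style fit of a two-part logit/log-normal structure:
    P(positive) plus log-normal parameters for the positive values."""
    pos = [v for v in y if v > 0]
    p_pos = len(pos) / len(y)
    logs = [math.log(v) for v in pos]
    mu = sum(logs) / len(logs)
    sigma2 = sum((l - mu) ** 2 for l in logs) / len(logs)
    # overall mean combines both parts: E[Y] = p * exp(mu + sigma^2 / 2)
    mean_y = p_pos * math.exp(mu + sigma2 / 2)
    return p_pos, mu, sigma2, mean_y

rng = random.Random(1)
# simulate: 40% weaned (zero intake), otherwise log-normal daily intake
y = [0.0 if rng.random() < 0.4 else math.exp(rng.gauss(1.0, 0.5))
     for _ in range(5000)]
p_pos, mu, sigma2, mean_y = fit_two_part(y)
```

An intervention effect in this framework can act on either part: lowering p_pos (more weaning) or lowering mu (less intake among bottle users), which is exactly the two-channel effect the abstract reports.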
Procedia PDF Downloads 349
15478 Measurement Errors and Misclassifications in Covariates in Logistic Regression: Bayesian Adjustment of Main and Interaction Effects and the Sample Size Implications
Authors: Shahadut Hossain
Abstract:
Measurement errors in continuous covariates and/or misclassifications in categorical covariates are common in epidemiological studies. Regression analysis ignoring such mismeasurements seriously biases the estimated main and interaction effects of covariates on the outcome of interest. Thus, adjustments for such mismeasurements are necessary. In this research, we propose a Bayesian parametric framework for eliminating the deleterious impacts of covariate mismeasurements in logistic regression. The proposed adjustment method is unified and thus can be applied to any generalized linear or non-linear regression model. Furthermore, adjustment for covariate mismeasurements requires validation data, usually in the form of either gold standard measurements or replicates of the mismeasured covariates on a subset of the study population. Initial investigation shows that the adequacy of such adjustment depends on the sizes of the main and validation samples, especially when the prevalences of the categorical covariates are low. Thus, we investigate the impact of main and validation sample sizes on the adjusted estimates and provide a general guideline about these sample sizes based on simulation studies.
Keywords: measurement errors, misclassification, mismeasurement, validation sample, Bayesian adjustment
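The bias that motivates such adjustment is easy to demonstrate by simulation: misclassifying a binary covariate attenuates its estimated effect toward the null. The prevalences, true odds ratio, and 15% error rate below are illustrative assumptions, not the paper's simulation design.

```python
import random

def odds_ratio(exposure, outcome):
    """Crude 2x2 odds ratio from paired binary lists."""
    a = sum(1 for e, o in zip(exposure, outcome) if e and o)
    b = sum(1 for e, o in zip(exposure, outcome) if e and not o)
    c = sum(1 for e, o in zip(exposure, outcome) if not e and o)
    d = sum(1 for e, o in zip(exposure, outcome) if not e and not o)
    return (a * d) / (b * c)

rng = random.Random(7)
n, true_or = 20000, 3.0
exposure = [rng.random() < 0.5 for _ in range(n)]
# outcome: baseline odds 0.25, multiplied by the true OR when exposed
outcome = []
for e in exposure:
    odds = 0.25 * (true_or if e else 1.0)
    outcome.append(rng.random() < odds / (1 + odds))
# non-differentially misclassify exposure with 15% error in each direction
noisy = [e if rng.random() < 0.85 else (not e) for e in exposure]
or_true = odds_ratio(exposure, outcome)
or_noisy = odds_ratio(noisy, outcome)
```

The naive analysis on the misclassified covariate recovers an odds ratio noticeably below the true value of 3, which is the attenuation the Bayesian framework, given validation data, is designed to correct.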
Procedia PDF Downloads 408
15477 Effects of Bacterial Inoculants and Enzymes Inoculation on the Fermentation and Aerobic Stability of Potato Hash Silage
Authors: B. D. Nkosi, T. F. Mutavhatsindi, J. J. Baloyi, R. Meeske, T. M. Langa, I. M. M. Malebana, M. D. Motiang
Abstract:
Potato hash (PH), a by-product of the food production industry, contains 188.4 g dry matter (DM)/kg and 3.4 g water soluble carbohydrate (WSC)/kg DM, and was mixed with wheat bran (70:30 as-is basis) to provide 352 g DM/kg and 315 g WSC/kg DM. The materials were ensiled with or without silage additives in 1.5 L anaerobic jars (3 jars/treatment) that were kept at 25-28 °C for 3 months. Four types of silage were produced: control (no additive, denoted as T1), celluclast enzyme (denoted as T2), emsilage bacterial inoculant (denoted as T3), and silosolve bacterial inoculant (denoted as T4). Three jars per treatment were opened after 3 months of ensiling for the determination of nutritive value, fermentation characteristics, and aerobic stability. Aerobic stability was determined by exposing silage samples to air for 5 days. The addition of the enzyme (T2) reduced (P < 0.05) silage pH and fiber fractions (NDF and ADF) while increasing (P < 0.05) residual WSC and lactic acid (LA) production, compared to the other treatments. The silage produced had a pH < 4.0, an indication of well-preserved silage. Bacterial inoculation (T3 and T4) improved (P < 0.05) the aerobic stability of the silage, as indicated by an increased number of hours of stability and lower CO2 production, compared to the other treatments. However, the aerobic stability of the silage was worsened (P < 0.05) by the addition of the enzyme (T2). Further work is needed to elucidate these effects on nutrient digestion and growth performance in ruminants fed the silage.
Keywords: by-products, digestibility, feeds, inoculation, ruminants, silage
Procedia PDF Downloads 439
15476 DenseNet and Autoencoder Architecture for COVID-19 Chest X-Ray Image Classification and Improved U-Net Lung X-Ray Segmentation
Authors: Jonathan Gong
Abstract:
Purpose: AI-driven solutions are at the forefront of many pathology and medical imaging methods. Using algorithms designed to improve the experience of medical professionals within their respective fields, the efficiency and accuracy of diagnosis can improve. In particular, X-rays are a fast and relatively inexpensive test that can diagnose diseases. In recent years, X-rays have not been widely used to detect and diagnose COVID-19. The underuse of X-rays is mainly due to low diagnostic accuracy and confounding with pneumonia, another respiratory disease. However, research in this field suggests that artificial neural networks can successfully diagnose COVID-19 with high accuracy. Models and Data: The dataset used is the COVID-19 Radiography Database, which includes images and masks of chest X-rays under the labels of COVID-19, normal, and pneumonia. The classification model uses an autoencoder and a pre-trained convolutional neural network (DenseNet201) to provide transfer learning, followed by a deep neural network that finalizes the feature extraction and predicts the diagnosis for the input image. This model was trained on 4035 images and validated on 807 separate images. The training images share an important feature: they are cropped beforehand to eliminate distractions when training the model. The image segmentation model uses an improved U-Net architecture to extract the lung mask from the chest X-ray image; it is trained on 8577 images and validated on a 20% validation split. The models' accuracy, precision, recall, F1-score, IOU, and loss are calculated on the external validation data. Results: The classification model achieved an accuracy of 97.65% and a loss of 0.1234 when differentiating COVID-19-infected, pneumonia-infected, and normal lung X-rays. The segmentation model achieved an accuracy of 97.31% and an IOU of 0.928. Conclusion: The proposed models can detect COVID-19, pneumonia, and normal lungs with high accuracy and derive the lung mask from a chest X-ray with similarly high accuracy. The hope is for these models to elevate the experience of medical professionals and provide insight into the future of the methods used.
Keywords: artificial intelligence, convolutional neural networks, deep learning, image processing, machine learning
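The IOU (intersection-over-union) score reported for the segmentation model is computed from the predicted and ground-truth masks; a minimal sketch on toy flattened binary masks (the masks below are assumptions, not the paper's data):

```python
def iou(pred, truth):
    """Intersection-over-union for binary masks given as flat 0/1 lists."""
    inter = sum(p & t for p, t in zip(pred, truth))
    union = sum(p | t for p, t in zip(pred, truth))
    return inter / union if union else 1.0

pred  = [1, 1, 0, 1, 0, 0, 1, 0]
truth = [1, 0, 0, 1, 1, 0, 1, 0]
score = iou(pred, truth)  # 3 overlapping pixels / 5 pixels in the union
```

Unlike pixel accuracy, IOU ignores the (usually dominant) correctly predicted background, which is why it is the more demanding of the two segmentation metrics quoted above.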
Procedia PDF Downloads 130
15475 The Impact of the Composite Expanded Graphite PCM on the PV Panel Whole Year Electric Output: Case Study Milan
Authors: Hasan A Al-Asadi, Ali Samir, Afrah Turki Awad, Ali Basem
Abstract:
Integrating phase change material (PCM) with photovoltaic (PV) panels is one of the effective techniques to minimize PV panel temperature and increase electric output. In order to investigate the impact of the PCM on the electric output of PV panels over a whole year, a lumped-distributed parameter model for the PV-PCM module has been developed. The development considers the PCM density variation between the solid and liquid phases, which increases the accuracy of the assessment of the electric output of the PV-PCM module. The second contribution is to assess the impact of the composite expanded graphite-PCM on the PV electric output in Milan over a whole year. The novel one-dimensional model has been solved using MATLAB software, and its results have been validated against experimental work from the literature. Weather and solar radiation data were collected, and the impact of the expanded graphite-PCM on the electric output of the PV panel over a whole year was investigated. The results indicate an enhancement rate of 2.39% in the annual electric output of the PV panel in Milan.
Keywords: PV panel efficiency, PCM, numerical model, solar energy
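The mechanism behind the reported gain (the PCM lowering cell temperature and thus raising output) can be sketched with the textbook linear temperature-derating model; the 300 W rating, the -0.4%/°C coefficient, and the cell temperatures below are assumptions for illustration, not the paper's parameters.

```python
def pv_power(irradiance_wm2, t_cell_c, p_stc=300.0, gamma=-0.004, t_ref=25.0):
    """Linear temperature-derating PV output model (illustrative sketch)."""
    return p_stc * (irradiance_wm2 / 1000.0) * (1 + gamma * (t_cell_c - t_ref))

p_hot = pv_power(1000.0, 65.0)   # un-cooled panel at 65 degC cell temperature
p_pcm = pv_power(1000.0, 45.0)   # PCM-cooled panel at 45 degC
gain_pct = (p_pcm - p_hot) / p_hot * 100
```

A 20 °C reduction in cell temperature yields roughly a 9.5% instantaneous gain in this toy case; averaged over a whole year of Milan weather, where the panel is often already cool, the annual gain is far smaller, consistent with the 2.39% figure above.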
Procedia PDF Downloads 173
15474 Analytical Solution for Stellar Distance Based on Photon Dominated Cosmic Expansion Model
Authors: Xiaoyun Li, Suoang Longzhou
Abstract:
This paper derives the analytical solution for stellar distance as a function of redshift based on the photon-dominated universe expansion model. First, it calculates the stellar separation speed and the farthest distance of observable stars via simulation. Then the analytical solution for stellar distance as a function of redshift is derived. It shows that when the redshift is large, the stellar distance (and its separation speed) is not proportional to the redshift, due to relativistic effects. It also reveals the relationship between stellar age and redshift. The correctness of the analytical solution is verified by the latest astronomical observations of type Ia supernovae in 2020.
Keywords: redshift, cosmic expansion model, analytical solution, stellar distance
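The saturation of separation speed at large redshift follows from the relativistic Doppler relation. A sketch under assumed constants (H0 = 70 km/s/Mpc), using the naive Hubble-law distance as a low-redshift stand-in rather than the paper's analytical solution:

```python
C = 299792.458  # speed of light, km/s
H0 = 70.0       # Hubble constant, km/s/Mpc (assumed value)

def recession_speed(z):
    """Relativistic Doppler: v/c = ((1+z)^2 - 1) / ((1+z)^2 + 1).
    Linear in z for small z, saturating toward c as z grows."""
    s = (1.0 + z) ** 2
    return C * (s - 1.0) / (s + 1.0)

def naive_distance_mpc(z):
    """Hubble-law estimate d = v/H0; valid only as a low-z approximation."""
    return recession_speed(z) / H0

for z in (0.1, 1.0, 5.0):
    print(z, recession_speed(z) / C, naive_distance_mpc(z))
```

At z = 5 the speed ratio is (36 - 1)/(36 + 1) = 35/37, visibly below the naive cz/c = 5, which is the nonlinearity the abstract refers to.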
Procedia PDF Downloads 161
15473 Knowledge Audit Model for Requirement Elicitation Process
Authors: Laleh Taheri, Noraini C. Pa, Rusli Abdullah, Salfarina Abdullah
Abstract:
Knowledge plays an important role in the success of any organization. Software development organizations are highly knowledge-intensive, especially in their requirement elicitation process (REP). Several problems arise in communicating and using knowledge during REP, such as misunderstanding, out-of-scope requirements, conflicting information, and changing requirements; all of these occur while transmitting requirements knowledge. Several studies have addressed these requirements problems in REP. Knowledge audit (KA) approaches have been proposed for managing knowledge in human resources, finance, and manufacturing, but there is a lack of studies applying KA to the requirement elicitation process. Therefore, this paper proposes a KA model for REP to support the acquisition of good requirements.
Keywords: knowledge audit, requirement elicitation process, KA model, knowledge in requirement elicitation
Procedia PDF Downloads 345
15472 Preference for Housing Services and Rational House Price Bubbles
Authors: Stefanie Jeanette Huber
Abstract:
This paper explores the relevance and implications of preferences for housing services for house price fluctuations through the lens of an overlapping generations model. The model implies that an economy whose agents have weaker preferences for housing services exhibits lower expenditure shares on housing services and tends to experience more frequent and more volatile housing bubbles. These predictions are tested empirically in the companion paper, Housing Booms and Busts - Convergences and Divergences across OECD Countries: between 1970 and 2013, countries that spent less on housing services as a share of total income experienced significantly more housing cycles, and the associated boom-bust cycles were more violent. Finally, the model is used to study the impact of rental subsidies and help-to-buy schemes on rational housing bubbles. Rental subsidies are found to help control housing bubbles, whereas help-to-buy schemes make the economy more bubble-prone.
Keywords: housing bubbles, housing booms and busts, preference for housing services, expenditure shares for housing services, rental and purchase subsidies
Procedia PDF Downloads 299
15471 Autonomous Quantum Competitive Learning
Authors: Mohammed A. Zidan, Alaa Sagheer, Nasser Metwally
Abstract:
Real-time learning is an important goal that much artificial intelligence research tries to achieve. Many problems and applications require low-cost learning, such as teaching a robot to classify and recognize patterns in real time, and real-time recall. In this contribution, we suggest a model of quantum competitive learning based on a series of quantum gates and an additional operator. The proposed model can recognize any incomplete pattern, where the probability of recognizing the desired pattern can be increased at the expense of the undesired ones; moreover, these undesired patterns can be utilized as new patterns for the system. The proposed model compares favorably with classical approaches and is more powerful than current quantum competitive learning approaches.
Keywords: competitive learning, quantum gates, winner-take-all
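For readers unfamiliar with the classical baseline, a winner-take-all competitive learning loop (the classical analogue the quantum model is compared against, not the quantum-gate construction itself) can be sketched as:

```python
import random

def winner_take_all(patterns, n_units=2, lr=0.1, epochs=20, seed=0):
    """Classical competitive learning: for each input, only the closest
    (winning) unit's weight vector moves toward that input."""
    rng = random.Random(seed)
    dim = len(patterns[0])
    weights = [[rng.random() for _ in range(dim)] for _ in range(n_units)]
    for _ in range(epochs):
        for x in patterns:
            # Winner = unit with the minimum squared Euclidean distance to x.
            winner = min(range(n_units),
                         key=lambda k: sum((w - xi) ** 2
                                           for w, xi in zip(weights[k], x)))
            weights[winner] = [w + lr * (xi - w)
                               for w, xi in zip(weights[winner], x)]
    return weights

# Two well-separated toy clusters; units should drift toward the centroids.
data = [(0.0, 0.1), (0.1, 0.0), (0.9, 1.0), (1.0, 0.9)]
print(winner_take_all(data))
```

The quantum version replaces this distance-and-update loop with gate operations, but the "winner" competition structure is the shared idea.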
Procedia PDF Downloads 472
15470 Predicting Indonesia External Debt Crisis: An Artificial Neural Network Approach
Authors: Riznaldi Akbar
Abstract:
In this study, we compared the performance of an artificial neural network (ANN) model with a back-propagation algorithm in correctly predicting in-sample versus out-of-sample external debt crises in Indonesia. We found that the exchange rate, foreign reserves, and exports are the major determinants of external debt crises. The ANN's in-sample performance is relatively superior: the model correctly classifies 89.12 per cent of crises with reasonably low false alarms of 7.01 per cent. Out of sample, prediction performance deteriorates considerably compared to in-sample performance: the ANN model tends to over-fit the in-sample data but does not fit the out-of-sample data well. Ten-fold cross-validation has been used to improve the out-of-sample prediction accuracy. The results also offer policy implications. Out-of-sample performance can be very sensitive to sample size, which can yield a higher total misclassification error and lower prediction accuracy. The ANN model can identify past crisis episodes with some accuracy, but predicting crises outside the estimation sample is much more challenging because of the presence of uncertainty.
Keywords: debt crisis, external debt, artificial neural network, ANN
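The headline in-sample figures (89.12 per cent correctly classified crises, 7.01 per cent false alarms) are standard confusion-matrix rates; a minimal sketch with toy labels:

```python
def crisis_rates(y_true, y_pred):
    """Correct-crisis rate (recall on the crisis class) and false-alarm
    rate (fraction of tranquil periods wrongly flagged as crisis)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    hit_rate = tp / (tp + fn)      # share of actual crises caught
    false_alarm = fp / (fp + tn)   # share of tranquil periods flagged
    return hit_rate, false_alarm

# Toy labels: 1 = crisis period, 0 = tranquil period.
y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 1, 0, 0, 0, 0, 0]
print(crisis_rates(y_true, y_pred))  # (0.75, 0.1666...)
```

Evaluating both rates, rather than raw accuracy, matters here because crisis periods are rare, so a model that never predicts a crisis can still look accurate.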
Procedia PDF Downloads 443
15469 Failure Inference and Optimization for Step Stress Model Based on Bivariate Wiener Model
Authors: Soudabeh Shemehsavar
Abstract:
In this paper, we consider a life test in which the failure times of the test units are not related deterministically to an observable, stochastic, time-varying covariate. In such a case, the joint distribution of the failure time and a marker value is useful for modeling the step-stress life test. Accelerating such an experiment is the main aim of this paper. We present a step-stress accelerated model based on a bivariate Wiener process, with one component as the latent (unobservable) degradation process, which determines the failure times, and the other as a marker process whose degradation values are recorded at the times of failure. Parametric inference based on the proposed model is discussed, and the optimization procedure for obtaining the optimal time for changing the stress level is presented. The optimization criterion is to minimize the approximate variance of the maximum likelihood estimator of a percentile of the products' lifetime distribution.
Keywords: bivariate normal, Fisher information matrix, inverse Gaussian distribution, Wiener process
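The latent-degradation-plus-marker setup can be illustrated by simulating a correlated bivariate Wiener process and stopping when the latent component first crosses a failure threshold. All parameter values below are illustrative, not the paper's:

```python
import math
import random

def simulate_failure_time(mu=(0.5, 0.3), sigma=(1.0, 0.8), rho=0.6,
                          threshold=10.0, dt=0.01, t_max=200.0, seed=1):
    """Euler simulation of a bivariate Wiener process: component X is the
    latent degradation (failure when it first crosses `threshold`);
    component Y is the observable marker, read off at the failure time."""
    rng = random.Random(seed)
    x = y = t = 0.0
    while x < threshold and t < t_max:
        z1, z2 = rng.gauss(0, 1), rng.gauss(0, 1)
        dw1 = math.sqrt(dt) * z1
        # Correlated increment: corr(dW1, dW2) = rho.
        dw2 = math.sqrt(dt) * (rho * z1 + math.sqrt(1 - rho ** 2) * z2)
        x += mu[0] * dt + sigma[0] * dw1
        y += mu[1] * dt + sigma[1] * dw2
        t += dt
    return t, y  # (failure time, marker value at failure)

t_fail, marker = simulate_failure_time()
print(t_fail, marker)
```

Repeating the simulation many times yields the joint (failure time, marker) sample that the paper's parametric inference would be fitted to; the first-passage time of a drifted Wiener process is the inverse Gaussian distribution named in the keywords.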
Procedia PDF Downloads 317
15468 The Effects of Different Parameters of Wood Floating Debris on Scour Rate Around Bridge Piers
Authors: Muhanad Al-Jubouri
Abstract:
Local scour is the most important of the several scour types affecting bridge performance and safety. Even though scour is widespread in bridges, especially during flood seasons, experimental tests cannot be applied to many standard highway bridges. A computational fluid dynamics numerical model was used to calculate local scouring and deposition for non-cohesive silt and clear-water conditions near single and double cylindrical piers under the effect of floating debris. The FLOW-3D software is employed with the RNG turbulence model, the Nielsen bed-load transport equation, and a fine mesh size. The numerical findings for single cylindrical piers correspond well with the physical model's results. Furthermore, after a parameter-effectiveness study of the range of outcomes for the user inputs (bed-load equation, mesh cell size, and turbulence model), the final numerical predictions are compared to experimental data. In that comparison, the error at the deepest point of the scour is 3.8% for the single-pier case.
Keywords: local scouring, non-cohesive, clear water, computational fluid dynamics, turbulence model, bed-load equation, debris
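The "Nilsson" bed-load equation named above is presumably Nielsen's (1992) formula, Φ = 12√θ(θ − θ_c), one of the bed-load options in FLOW-3D-style sediment scour models. A sketch under assumed grain parameters (treat every constant below as a placeholder):

```python
import math

def nielsen_bedload(theta, theta_c=0.05, d50=0.0005, s=2.65, g=9.81):
    """Nielsen (1992) bed-load transport sketch (theta_c, d50 assumed):
    dimensionless rate Phi = 12*sqrt(theta)*(theta - theta_c) for
    theta > theta_c, converted to a volumetric rate per unit width
    q_b = Phi * sqrt((s - 1) * g * d50**3), in m^2/s."""
    if theta <= theta_c:
        return 0.0  # below the critical Shields number: no transport
    phi = 12.0 * math.sqrt(theta) * (theta - theta_c)
    return phi * math.sqrt((s - 1.0) * g * d50 ** 3)

for shields in (0.03, 0.1, 0.3):
    print(shields, nielsen_bedload(shields))
```

The scour depth around a pier then follows from applying such a rate cell by cell in the CFD mesh, which is what the solver does internally.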
Procedia PDF Downloads 69
15467 The Role of Group Size, Public Employees’ Wages and Control Corruption Institutions in a Game-Theoretical Model of Public Corruption
Authors: Pablo J. Valverde, Jaime E. Fernandez
Abstract:
This paper shows under which conditions public corruption can emerge. The theoretical model includes variables such as the public employee wage (w), a corruption-control parameter (c), and the size (GS) of groups of interacting public officers and contractors. The system behavior is analyzed using phase diagrams based on combinations of these parameters (c, w, GS). Numerical simulations are implemented to check the analytic results based on Nash equilibria of the theoretical model. Major findings include a functional relationship between wages and network topology that suggests ways to reduce the emergence of corrupt behavior.
Keywords: public corruption, game theory, complex systems, Nash equilibrium
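A stripped-down illustration of the mechanism: a 2x2 bribery game in which the wage w and the control parameter c shift the officer's payoffs. The payoff structure below is hypothetical, not the paper's model:

```python
from itertools import product

def officer_payoff(officer, contractor, w=1.0, c=0.5, bribe=0.8):
    """Hypothetical payoffs: a corrupt officer gains the bribe but faces
    an expected sanction c*w; the honest payoff is just the wage w."""
    if officer == "corrupt" and contractor == "bribe":
        return w + bribe - c * w
    return w

def contractor_payoff(officer, contractor, bribe=0.8, rent=1.5):
    """The contractor earns an extra rent from a corrupt deal, minus the bribe."""
    if officer == "corrupt" and contractor == "bribe":
        return rent - bribe
    return 0.0

def pure_nash(w=1.0, c=0.5, bribe=0.8, rent=1.5):
    """Brute-force check of pure-strategy Nash equilibria in the 2x2 game."""
    strat_o, strat_c = ("honest", "corrupt"), ("comply", "bribe")
    eq = []
    for o, k in product(strat_o, strat_c):
        best_o = all(officer_payoff(o, k, w, c, bribe) >=
                     officer_payoff(o2, k, w, c, bribe) for o2 in strat_o)
        best_c = all(contractor_payoff(o, k, bribe, rent) >=
                     contractor_payoff(o, k2, bribe, rent) for k2 in strat_c)
        if best_o and best_c:
            eq.append((o, k))
    return eq

print(pure_nash(c=0.5))  # weak control: a corrupt equilibrium survives
print(pure_nash(c=2.0))  # strong control: only honest outcomes remain
```

Sweeping c and w in this way is the one-pair analogue of the paper's phase diagrams; the group-size effect would come from replicating the game across clusters of players.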
Procedia PDF Downloads 242
15466 Evaluating the Suitability and Performance of Dynamic Modulus Predictive Models for North Dakota’s Asphalt Mixtures
Authors: Duncan Oteki, Andebut Yeneneh, Daba Gedafa, Nabil Suleiman
Abstract:
Most agencies lack the equipment required to measure the dynamic modulus (|E*|) of asphalt mixtures, necessitating the use of predictive models. This study compared measured |E*| values for nine North Dakota asphalt mixes against predictions from the original Witczak, modified Witczak, and Hirsch models. The influence of temperature on the |E*| models was investigated, and Pavement ME simulations were conducted using measured |E*| and predictions from the most accurate |E*| model. The original Witczak model yielded the lowest Se/Sy and highest R² values, indicating the lowest bias and highest accuracy, while the Hirsch model exhibited the poorest overall performance. Using predicted |E*| as Pavement ME input generated conservative distress predictions compared to using measured |E*|. The original Witczak model is recommended for predicting |E*| for low-reliability pavements in North Dakota.
Keywords: asphalt mixture, binder, dynamic modulus, MEPDG, pavement ME, performance, prediction
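The ranking criteria named above, Se/Sy and R², can be computed directly from measured/predicted |E*| pairs. This sketch uses the simplified relation R² = 1 − (Se/Sy)² (the MEPDG literature applies a degrees-of-freedom correction) and toy data:

```python
import math

def goodness_of_fit(measured, predicted):
    """Se/Sy (standard error of estimate over standard deviation) and a
    simplified R^2 = 1 - (Se/Sy)^2, as used to rank |E*| predictive models:
    lower Se/Sy and higher R^2 mean less bias and better accuracy."""
    n = len(measured)
    mean_y = sum(measured) / n
    se = math.sqrt(sum((m - p) ** 2 for m, p in zip(measured, predicted)) / (n - 1))
    sy = math.sqrt(sum((m - mean_y) ** 2 for m in measured) / (n - 1))
    ratio = se / sy
    return ratio, 1.0 - ratio ** 2

# Toy |E*| values in MPa (illustrative, not the study's data).
measured = [1200.0, 3500.0, 8000.0, 15000.0, 22000.0]
predicted = [1350.0, 3300.0, 8400.0, 14200.0, 21500.0]
ratio, r2 = goodness_of_fit(measured, predicted)
print(ratio, r2)
```

Running this once per model (original Witczak, modified Witczak, Hirsch) and comparing the two statistics reproduces the kind of ranking the study reports.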
Procedia PDF Downloads 48
15465 Efficiency of Secondary Schools by ICT Intervention in Sylhet Division of Bangladesh
Authors: Azizul Baten, Kamrul Hossain, Abdullah-Al-Zabir
Abstract:
The objective of this study is to develop an appropriate stochastic frontier model of secondary school efficiency under ICT intervention and to examine the impact of ICT challenges on secondary school efficiency in the Sylhet division of Bangladesh using stochastic frontier analysis. The Translog stochastic frontier model was found more appropriate than the Cobb-Douglas model for secondary school efficiency under ICT intervention. Based on the results of the Cobb-Douglas model, the coefficients of the number of teachers, the number of students, and teaching ability had positive effects on the level of efficiency, indicating that these factors are related to technical efficiency. Regarding the inefficiency effects in both the Cobb-Douglas and Translog models, the coefficient of the ICT lab decreased secondary school inefficiency, but online classes in school increased it. Teachers' preference for ICT tools such as multimedia projectors contributed to decreasing secondary school inefficiency in the Sylhet division of Bangladesh. The interaction effects between the number of teachers and the number of classrooms, the number of students and the number of classrooms, the number of students and teaching ability, and the number of classrooms and teaching ability were positive, increasing secondary school efficiency. The overall mean efficiency of urban secondary schools was 84.66% for the Translog model and 83.63% for the Cobb-Douglas model; for rural secondary schools it was 80.98% and 81.24%, respectively. Thus, urban secondary schools performed better than rural secondary schools in the Sylhet division.
The Tobit model results show that the teacher-student ratio had a positive influence on secondary school efficiency, while teaching experience of 1 to 5 years and of more than 10 years, MPO-type schools, and the conventional teaching method had negative and significant influences. The estimated value of σ² (0.0625) was significantly different from zero, indicating a good fit. The value of γ (0.9872) was positive, meaning that 98.72 per cent of the random variation in secondary school outcomes is due to inefficiency.
Keywords: efficiency, secondary schools, ICT, stochastic frontier analysis
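The reported σ² = 0.0625 and γ = 0.9872 decompose directly into the inefficiency and noise variances of the composed stochastic-frontier error:

```python
def frontier_variance_decomposition(sigma_sq=0.0625, gamma=0.9872):
    """Stochastic-frontier decomposition: gamma is the share of total
    variance sigma^2 attributable to inefficiency (u) rather than to
    random noise (v), so sigma_u^2 = gamma * sigma^2."""
    sigma_u_sq = gamma * sigma_sq          # inefficiency variance
    sigma_v_sq = (1.0 - gamma) * sigma_sq  # random-noise variance
    return sigma_u_sq, sigma_v_sq

u_var, v_var = frontier_variance_decomposition()
print(u_var, v_var)  # inefficiency dominates the noise component
```

With γ this close to one, almost all of the dispersion in school outcomes reflects inefficiency rather than measurement noise, which is why a frontier model is preferred here over plain OLS.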
Procedia PDF Downloads 151