Search results for: grammatical error correction
579 The Identification of Combined Genomic Expressions as a Diagnostic Factor for Oral Squamous Cell Carcinoma
Authors: Ki-Yeo Kim
Abstract:
Trends in genetics are shifting toward identifying differential co-expression among correlated genes rather than significant individual genes. Moreover, it is known that a combined biomarker pattern improves the discrimination of a specific cancer. The identification of a combined biomarker is also necessary for the early detection of invasive oral squamous cell carcinoma (OSCC). To identify a combined biomarker that could improve the discrimination of OSCC, we explored the appropriate number of genes in a combined gene set in order to attain the highest level of accuracy. After detecting a significant gene set containing the pre-defined number of genes, a combined expression was identified using the weights of the genes in the set. We used Principal Component Analysis (PCA) for the weight calculation. In this process, we used three public microarray datasets: one for identifying the combined biomarker, and the other two for validation. Discrimination accuracy was measured by the out-of-bag (OOB) error. There was no relation between significance and discrimination accuracy for individual genes. The identified gene set included both significant and insignificant genes. One of the most significant gene sets for classifying normal tissue versus OSCC included MMP1, SOCS3 and ACOX1. Furthermore, for discriminating oral dysplasia from OSCC, two combined biomarkers were identified. The combined genomic expression achieved better performance in discriminating the different conditions than any single significant gene. Therefore, accurate diagnosis of cancer can be expected to be possible with a combined biomarker.
Keywords: oral squamous cell carcinoma, combined biomarker, microarray dataset, correlated genes
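The weighting step described above — using the first principal component of the correlated genes as the combined expression — can be sketched as follows. This is a minimal illustration, not the study's code: the expression matrix is synthetic, and the gene names are taken from the abstract only as column labels.

```python
import numpy as np

# Hypothetical expression matrix: rows = samples, columns = genes
# (MMP1, SOCS3, ACOX1). Values are synthetic, for illustration only.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3)) + np.array([5.0, 3.0, 1.0])

# Centre the data and take the first principal component as the weight vector.
Xc = X - X.mean(axis=0)
cov = np.cov(Xc, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
w = eigvecs[:, -1]                       # eigenvector of the largest eigenvalue

# The combined genomic expression is the PCA-weighted sum per sample.
combined = Xc @ w
```

The resulting one-dimensional `combined` score per sample would then feed a classifier whose OOB error measures discrimination accuracy, as the abstract describes.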
578 Numerical Solution of Space Fractional Order Linear/Nonlinear Reaction-Advection Diffusion Equation Using Jacobi Polynomial
Authors: Shubham Jaiswal
Abstract:
Fractional calculus plays an important role in modelling many physical problems and engineering processes, which are well described by fractional differential equations (FDEs). A reliable and efficient technique for solving such FDEs is therefore needed. In this article, a numerical solution is derived for a class of fractional differential equations, namely space fractional order reaction-advection dispersion equations subject to initial and boundary conditions. In the proposed approach, shifted Jacobi polynomials are used to approximate the solutions, together with the shifted Jacobi operational matrix of fractional order and the spectral collocation method. The main advantage of this approach is that it converts such problems into systems of algebraic equations, which are easier to solve. The proposed approach is effective for both linear and nonlinear FDEs. To show the reliability, validity and high accuracy of the proposed approach, numerical results for some illustrative examples are reported and compared with the analytical results already available in the literature. The error analysis for each case, presented through graphs and tables, confirms the exponential convergence rate of the proposed method.
Keywords: space fractional order linear/nonlinear reaction-advection diffusion equation, shifted Jacobi polynomials, operational matrix, collocation method, Caputo derivative
577 Coding and Decoding versus Space Diversity for Rayleigh Fading Radio Frequency Channels
Authors: Ahmed Mahmoud Ahmed Abouelmagd
Abstract:
Diversity is the usual remedy for transmitted signal level variations (fading phenomena) in radio frequency channels. Diversity techniques utilize two or more copies of a signal and combine them to combat fading. The basic concept of diversity is to transmit the signal via several independent diversity branches to obtain independent signal replicas, via the time, frequency, space, and polarization diversity domains. Coding and decoding processes can be an alternative remedy for fading phenomena: they cannot increase the channel capacity, but they can improve the error performance. In this paper we propose the use of replication decoding with the BCH code class, and the Viterbi decoding algorithm with convolutional coding, as examples of coding and decoding processes. The results are compared to those obtained from two optimized selection space diversity techniques. The performance over a Rayleigh fading channel, the model considered for radio frequency channels, is evaluated for each case. The evaluation results show that the coding and decoding approaches, especially BCH coding with the replication decoding scheme, give better performance than the selection space diversity optimization approaches. An approach combining coding and decoding diversity with space diversity is also considered; its main disadvantage is its complexity, but it yields good performance results.
Keywords: Rayleigh fading, diversity, BCH codes, replication decoding, convolutional coding, Viterbi decoding, space diversity
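The selection-diversity baseline the paper compares against can be illustrated with a short Monte Carlo sketch: BPSK over a Rayleigh fading channel, picking the strongest of L branches per symbol. This is a generic textbook simulation under assumed parameters (10 dB average SNR, 2 branches), not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(1)
n, snr_db, L = 200_000, 10.0, 2
snr = 10 ** (snr_db / 10)

bits = rng.integers(0, 2, n)
s = 2 * bits - 1                                   # BPSK symbols

# Independent Rayleigh branch gains and complex noise for L diversity branches.
h = (rng.normal(size=(L, n)) + 1j * rng.normal(size=(L, n))) / np.sqrt(2)
noise = (rng.normal(size=(L, n)) + 1j * rng.normal(size=(L, n))) / np.sqrt(2 * snr)
r = h * s + noise

# Selection diversity: pick the branch with the strongest gain per symbol,
# then detect coherently.
best = np.abs(h).argmax(axis=0)
r_sel = r[best, np.arange(n)] * np.conj(h[best, np.arange(n)])
ber_sel = np.mean((r_sel.real > 0) != (bits == 1))

# Single-branch (no diversity) reference.
r_one = r[0] * np.conj(h[0])
ber_one = np.mean((r_one.real > 0) != (bits == 1))
```

The two-branch selection BER comes out well below the single-branch BER, which is the gain that the coded approaches in the paper are measured against.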
576 Allometric Models for Biomass Estimation in Savanna Woodland Area, Niger State, Nigeria
Authors: Abdullahi Jibrin, Aishetu Abdulkadir
Abstract:
The development of allometric models is crucial to accurate forest biomass/carbon stock assessment. The aim of this study was to develop a set of biomass prediction models enabling the determination of total tree aboveground biomass for a savannah woodland area in Niger State, Nigeria. Based on data collected through biometric measurements of 1816 trees and destructive sampling of 36 trees, five species-specific models and one site-specific model were developed. The sample was distributed equally among the five most dominant species in the study site (Vitellaria paradoxa, Irvingia gabonensis, Parkia biglobosa, Anogeissus leiocarpus, Pterocarpus erinaceus). First, equations were developed for the five individual species; second, the five species were pooled to develop a mixed-species allometric equation. Overall, there was a strong positive relationship between total tree biomass and stem diameter. Coefficients of determination (R² values) ranging from 0.93 to 0.99 (p < 0.001) were realised for the models, with considerably low standard errors of the estimates (SEE), confirming that total tree aboveground biomass has a significant relationship with dbh. The F-test values for the biomass prediction models were also significant at p < 0.001, indicating that the models are valid. This study recommends that, for improved biomass estimates in the study site, the site-specific biomass models should preferably be used instead of generic models.
Keywords: allometry, biomass, carbon stock, model, regression equation, woodland, inventory
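The regression step — fitting a power-law allometric model B = a · dbh^b by linearising in log space and checking R² — can be sketched as below. The dbh and biomass values are synthetic placeholders, not the study's destructive-sampling data.

```python
import numpy as np

# Hypothetical destructive-sampling data: dbh (cm) and measured
# aboveground biomass (kg); values are illustrative only.
dbh = np.array([8.0, 12.0, 15.0, 20.0, 25.0, 30.0, 35.0, 40.0])
biomass = 0.12 * dbh ** 2.4 * np.exp(np.random.default_rng(2).normal(0, 0.05, 8))

# Allometric model B = a * dbh^b, linearised as ln B = ln a + b ln dbh.
b, ln_a = np.polyfit(np.log(dbh), np.log(biomass), 1)
a = np.exp(ln_a)

# Goodness of fit (R^2) in log space, as in the study's regressions.
pred = ln_a + b * np.log(dbh)
resid = np.log(biomass) - pred
r2 = 1 - resid.var() / np.log(biomass).var()
```

With real field data, the same two fitted numbers (a, b) plus R² and SEE are what each species-specific equation reports.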
575 Trusting the Eyes: The Changing Landscape of Eyewitness Testimony
Authors: Manveen Singh
Abstract:
Since the very advent of law enforcement, eyewitness testimony has played a pivotal role in identifying, arresting and convicting suspects. Relying heavily on the accuracy of human memory, nothing seems to carry more weight with the judiciary than the testimony of an actual witness. The acceptance of eyewitness testimony as a substantive piece of evidence lies embedded in the assumption that the human mind is adept at recording and storing events. Research, though, has proven otherwise. Having carried out extensive study in the field of eyewitness testimony over the past 40 years, psychologists have concluded that human memory is fragile and needs to be treated carefully. The question that arises, then, is: how reliable is eyewitness testimony? The credibility of eyewitness testimony, simply put, depends on several factors, leaving it reliable at times and not so much at others. This is further substantiated by the fact that, as per scientific research, over 75 percent of all eyewitness testimonies may stand in error, with quite a few of these cases resulting in life sentences. Although the advancement of scientific techniques, especially DNA testing, has helped overturn many of these eyewitness-testimony-based convictions, eyewitness identifications continue to form the backbone of most police investigations and courtroom decisions to date. What, then, is the solution to this long-standing concern regarding the accuracy of eyewitness accounts? The present paper shall analyze the linkage between human memory and eyewitness identification, as well as the various factors governing the credibility of eyewitness testimonies. Furthermore, it shall elaborate upon some best practices developed over the years to help reduce mistaken identifications, and in the process trace the changing landscape of eyewitness testimony amidst the evolution of DNA and trace evidence.
Keywords: DNA, eyewitness, identification, testimony, evidence
574 Numerical Study of Jet Impingement Heat Transfer
Authors: A. M. Tiara, Sudipto Chakraborty, S. K. Pal
Abstract:
Impinging jets and their different configurations are important from the viewpoint of fluid flow characteristics and their influence on heat transfer from metal surfaces, owing to their complex flow behaviour. Such flow characteristics result in highly variable heat transfer from the surface, producing varying cooling rates that affect mechanical properties including hardness and strength. The overall objective of the current research is to conduct a fundamental investigation of the heat transfer mechanisms for an impinging coolant jet. Numerical simulation of the cooling process gives a detailed analysis of the different parameters involved, even though employing Computational Fluid Dynamics (CFD) to simulate the real-time process, being a relatively new research area, poses many challenges. The heat transfer mechanism in the current research is actuated by jet cooling. The computational tool used in the ongoing research for simulation of the cooling process is the ANSYS Workbench software. In addition to determination of the jet impingement patterns, which is the major aim of the present analysis, the temperature and heat flux distribution along the steel strip can be observed, together with the effect of various flow parameters on the heat transfer rate. This work includes modelling both jet and air-atomized cooling techniques using CFD methodology and validating against experimental results, including trial and error with different models and comparison of cooling rates from both techniques. Finally, some concluding remarks identify gaps in the available literature that have influenced the path of the current investigation.
Keywords: CFD, heat transfer, impinging jets, numerical simulation
573 Commuters Trip Purpose Decision Tree Based Model of Makurdi Metropolis, Nigeria and Strategic Digital City Project
Authors: Emmanuel Okechukwu Nwafor, Folake Olubunmi Akintayo, Denis Alcides Rezende
Abstract:
Decision tree models are versatile and interpretable machine learning algorithms widely used for both classification and regression tasks, which can be related to cities, whether physical or digital. The aim of this research is to assess how well decision tree algorithms can predict trip purposes in Makurdi, Nigeria, while also exploring their connection to the strategic digital city initiative. The research methodology involves formalizing household demographic and trip information datasets obtained from an extensive survey process. Modelling and prediction were achieved using the Python programming language, and evaluation metrics such as R-squared and mean absolute error were used to assess the decision tree algorithm's performance. The results indicate that the model performed well, with accuracies of 84% and 68% and low MAE values of 0.188 and 0.314 on training and validation data, respectively. This suggests the model can be relied upon for future prediction. The conclusion reiterates that this model will assist decision-makers, including urban planners, transportation engineers, government officials, and commuters, in making informed decisions on transportation planning and management within the framework of a strategic digital city. Its application will enhance the efficiency, sustainability, and overall quality of transportation services in Makurdi, Nigeria.
Keywords: decision tree algorithm, trip purpose, intelligent transport, strategic digital city, travel pattern, sustainable transport
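The core mechanism of the decision tree algorithm named above — choosing the split that minimises weighted Gini impurity — can be shown with a minimal one-level sketch. The feature, labels, and threshold below are illustrative inventions, not the Makurdi survey data or the authors' model.

```python
# Minimal decision-stump sketch: one Gini-impurity split on a single
# numeric feature, the building block of a decision tree classifier.
# Data (feature = trip distance in km, label = trip purpose) are
# purely illustrative.

def gini(labels):
    """Gini impurity of a list of class labels."""
    n = len(labels)
    return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

def best_split(xs, ys):
    """Return the threshold minimising the weighted child impurity."""
    best = (None, float("inf"))
    for t in sorted(set(xs)):
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        if not left or not right:
            continue
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(ys)
        if score < best[1]:
            best = (t, score)
    return best[0]

xs = [1.0, 1.5, 2.0, 6.0, 7.5, 9.0]
ys = ["school", "school", "school", "work", "work", "work"]
threshold = best_split(xs, ys)   # perfectly separates the two purposes
```

A full tree repeats this split recursively on each child node; accuracy and MAE are then computed on held-out data, as the abstract reports.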
572 Mathematical Modelling of Ultrasound Pre-Treatment in Microwave Dried Strawberry (Fragaria L.) Slices
Authors: Hilal Uslu, Salih Eroglu, Betul Ozkan, Ozcan Bulantekin, Alper Kuscu
Abstract:
In this study, strawberry (Fragaria L.) fruits pretreated with ultrasound (US) were dried in a microwave at 90 W power, and mathematical modelling was applied to the dried fruits using different experimental thin-layer models. The sliced fruits were subjected to ultrasound treatment at a frequency of 40 kHz for 10, 20, and 30 minutes in an ultrasonic water bath, at a fruit-to-water ratio of 1:4, and then dried in the microwave (90 W). The drying process continued until the product moisture fell below 10%. By analyzing the moisture change of the products over time, eight different thin-layer drying models (Newton, Page, modified Page, Midilli, Henderson and Pabis, logarithmic, two-term, and Wang and Singh) were tested for agreement with the experimental data. The MATLAB R2015a statistical program was used for the modelling, and the best-fitting model was determined using the adjusted coefficient of determination (R²adj) and the root mean square error (RMSE). According to the analysis, the drying model that best describes the drying behavior under both drying conditions was the Midilli model, with high R²adj and low RMSE values. For the control, 10, 20, and 30 min US groups, the R²adj and RMSE values were, respectively, 0.9997 and 0.005298; 0.9998 and 0.004735; 0.9995 and 0.007031; and 0.9917 and 0.02773. In addition, effective diffusion coefficients were calculated for each group as 3.80 × 10⁻⁸, 3.71 × 10⁻⁸, 3.26 × 10⁻⁸ and 3.5 × 10⁻⁸ m²/s, respectively.
Keywords: mathematical modelling, microwave drying, strawberry, ultrasound
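The model-fitting step can be illustrated with the Page model, one of the eight thin-layer models tested, since it fits by simple linearisation: MR = exp(−k·tⁿ) gives ln(−ln MR) = ln k + n·ln t. The time and moisture-ratio values below are synthetic stand-ins, not the study's measurements (the Midilli model, which won in the study, needs nonlinear least squares instead).

```python
import numpy as np

# Synthetic "measured" moisture ratio over drying time (min),
# generated from a known Page model so the fit can be checked.
t = np.array([5.0, 10.0, 20.0, 30.0, 45.0, 60.0])
mr = np.exp(-0.02 * t ** 1.1)

# Linearised Page fit: ln(-ln MR) = ln k + n ln t.
n_exp, ln_k = np.polyfit(np.log(t), np.log(-np.log(mr)), 1)
k = np.exp(ln_k)

# RMSE between measured and predicted moisture ratio.
mr_pred = np.exp(-k * t ** n_exp)
rmse = np.sqrt(np.mean((mr - mr_pred) ** 2))
```

Comparing RMSE (and R²adj) across all eight fitted models is exactly the selection criterion the abstract describes.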
571 Sensitivity Based Robust Optimization Using 9 Level Orthogonal Array and Stepwise Regression
Authors: K. K. Lee, H. W. Han, H. L. Kang, T. A. Kim, S. H. Han
Abstract:
For the robust optimization of manufacturing product designs, there are design objectives that must be achieved, such as minimization of the mean and standard deviation of the objective functions within the required sensitivity constraints. The authors utilized the sensitivity of the objective functions and constraints with respect to the effective design variables to reduce the computational burden associated with the evaluation of the probabilities. The individual mean and sensitivity values could be estimated easily by using 9-level orthogonal array based response surface models optimized by stepwise regression. The present study evaluates the proposed procedure through the robust optimization of rubber domes, which are commonly used for keyboard switching, using a 9-level orthogonal array and stepwise regression along with a desirability function. In addition, a new robust optimization process, I2GEO (Identify, Integrate, Generate, Explore and Optimize), was proposed on the basis of the robust optimization of the rubber domes. The optimized results from the response surface models and the results estimated by finite element analysis were consistent within a small margin of error. The standard deviation of the objective function decreased by 54.17% with the suggested sensitivity-based robust optimization. (Business for Cooperative R&D between Industry, Academy, and Research Institute funded by the Korea Small and Medium Business Administration in 2017, S2455569)
Keywords: objective function, orthogonal array, response surface model, robust optimization, stepwise regression
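The stepwise-regression ingredient — greedily adding the design variable that most reduces the residual sum of squares of the response surface — can be sketched as follows. The design matrix and response are synthetic (only columns 0 and 2 drive the response); this is forward selection only, not the authors' full procedure.

```python
import numpy as np

# Synthetic design: 60 runs, 4 candidate design variables; only
# x0 and x2 truly affect the response.
rng = np.random.default_rng(3)
X = rng.normal(size=(60, 4))
y = 3.0 * X[:, 0] - 2.0 * X[:, 2] + rng.normal(0, 0.1, 60)

def rss(cols):
    """Residual sum of squares of a least-squares fit on the given columns."""
    A = np.column_stack([np.ones(len(y))] + [X[:, c] for c in cols])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return np.sum((y - A @ beta) ** 2)

# Forward stepwise selection: add two terms, each the best available.
selected = []
for _ in range(2):
    best = min((c for c in range(4) if c not in selected),
               key=lambda c: rss(selected + [c]))
    selected.append(best)
```

A full stepwise procedure also removes terms that lose significance after each addition; the greedy forward pass above is the core idea.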
570 Use of Statistical Correlations for the Estimation of Shear Wave Velocity from Standard Penetration Test-N-Values: Case Study of Algiers Area
Authors: Soumia Merat, Lynda Djerbal, Ramdane Bahar, Mohammed Amin Benbouras
Abstract:
Along with shear wave velocity, many soil parameters are associated with the standard penetration test (SPT) as a dynamic in situ experiment. SPT-N data and geophysical data often do not exist for the same area. Statistical analysis of the correlation between these parameters is an alternative method to estimate Vₛ conveniently and without additional investigations or data acquisition. Shear wave velocity is a basic engineering tool required to define the dynamic properties of soils. In many instances, engineers opt for empirical correlations between shear wave velocity (Vₛ) and reliable static field test data, such as standard penetration test (SPT) N values or CPT (Cone Penetration Test) values, to estimate shear wave velocity or dynamic soil parameters. The relation between Vₛ and SPT-N values for the Algiers area is predicted using the collected data, and it is also compared with previously suggested formulas for Vₛ determination by measuring the Root Mean Square Error (RMSE) of each model. The Algiers area is situated in a high seismic zone (Zone III [RPA 2003: règlement parasismique algérien]), so the study is important for this region. The principal aim of this paper is to compare the field measurements of the down-hole test with the empirical models to show which of the proposed formulas is applicable for predicting and deducing shear wave velocity values.
Keywords: empirical models, RMSE, shear wave velocity, standard penetration test
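The model-comparison step can be sketched as below: empirical Vₛ–N correlations commonly take the power-law form Vₛ = a·N^b, and each candidate's RMSE against down-hole measurements decides which applies. The N values, "measured" velocities, and both coefficient pairs are illustrative placeholders, not the study's data or the published formulas.

```python
import numpy as np

# Synthetic stand-in for down-hole Vs measurements at SPT-N values.
N = np.array([8, 12, 18, 25, 33, 40], dtype=float)
vs_measured = 95.0 * N ** 0.32

def rmse(pred, obs):
    return float(np.sqrt(np.mean((pred - obs) ** 2)))

# Two hypothetical power-law correlations Vs = a * N^b to compare.
models = {"model_A": (85.0, 0.35), "model_B": (97.0, 0.31)}
errors = {name: rmse(a * N ** b, vs_measured) for name, (a, b) in models.items()}
best = min(errors, key=errors.get)
```

The correlation with the lowest RMSE against the down-hole data is the one recommended for the site, mirroring the paper's selection criterion.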
569 Computational Fluid Dynamic Modeling of Mixing Enhancement by Stimulation of Ferrofluid under Magnetic Field
Authors: Neda Azimi, Masoud Rahimi, Faezeh Mohammadi
Abstract:
A computational fluid dynamics (CFD) simulation was performed to investigate the effect of ferrofluid stimulation on the hydrodynamic and mass transfer characteristics of two immiscible liquid phases in a Y-micromixer. The main purpose of this work was to develop a numerical model able to simulate the hydrodynamics of ferrofluid flow under a magnetic field and determine its effect on mass transfer characteristics. A uniform external magnetic field was applied perpendicular to the flow direction. The volume of fluid (VOF) approach was used for simulating the multiphase flow of the ferrofluid and the two immiscible liquid flows. The geometric reconstruction scheme (Geo-Reconstruct), based on piecewise linear interpolation (PLIC), was used for reconstruction of the interface in the VOF approach. The mass transfer rate was defined via an equation as a function of the mass concentration gradient of the transported species and added into the phase interaction panel using a user-defined function (UDF). The magnetic field was solved numerically by the Fluent MHD module, based on solving the magnetic induction equation. CFD results were validated against experimental data and good agreement was achieved; the maximum relative error for extraction efficiency was about 7.52%. It was shown that ferrofluid actuation by a magnetic field can be considered an efficient mixing agent for liquid-liquid two-phase mass transfer in microdevices.
Keywords: CFD modeling, hydrodynamic, micromixer, ferrofluid, mixing
568 A Hybrid Genetic Algorithm and Neural Network for Wind Profile Estimation
Authors: M. Saiful Islam, M. Mohandes, S. Rehman, S. Badran
Abstract:
The increasing necessity of wind power demands precise knowledge of wind resources. Methodical investigation of potential locations is required for wind power deployment. High penetration of wind energy into the grid is leading to multi-megawatt installations with huge investment costs. This calls for determining appropriate places for wind farm operation. For accurate assessment, detailed examination of the wind speed profile, relative humidity, temperature and other geological or atmospheric parameters is required. Among all the uncertainty factors influencing wind power estimation, vertical extrapolation of wind speed is perhaps the most difficult and critical one. Different approaches have been used for the extrapolation of wind speed to hub height, mainly based on the log law, the power law and various modifications of the two. This paper proposes an Artificial Neural Network (ANN) and Genetic Algorithm (GA) based hybrid model, namely GA-NN, for vertical extrapolation of wind speed. The model is simple in the sense that it does not require any parametric estimations such as the wind shear coefficient, roughness length or atmospheric stability, and it is also reliable compared to other methods. The model uses available measured wind speeds at 10 m, 20 m and 30 m heights to estimate wind speeds up to 100 m. A good comparison is found between measured and estimated wind speeds at 30 m and 40 m, with approximately 3% mean absolute percentage error. Comparisons with an ANN and the power law further prove the feasibility of the proposed method.
Keywords: wind profile, vertical extrapolation of wind, genetic algorithm, artificial neural network, hybrid machine learning
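The power-law baseline the GA-NN is compared against can be sketched briefly: fit the shear exponent α in v(z) = v_ref·(z/z_ref)^α from the 10/20/30 m measurements, then extrapolate to hub height. The speed values are illustrative, not the paper's site data.

```python
import numpy as np

# Illustrative measured speeds (m/s) at the three instrumented heights.
heights = np.array([10.0, 20.0, 30.0])
speeds = np.array([5.0, 5.6, 6.0])

# Power law v(z) = v10 * (z / 10)^alpha, fitted in log-log space.
alpha, ln_v10 = np.polyfit(np.log(heights / 10.0), np.log(speeds), 1)
v100 = np.exp(ln_v10) * (100.0 / 10.0) ** alpha   # extrapolated hub-height speed

def mape(pred, obs):
    """Mean absolute percentage error, the metric quoted in the abstract."""
    return float(np.mean(np.abs((pred - obs) / obs)) * 100)
```

The GA-NN's claimed advantage is precisely that it skips the explicit estimation of α (and of roughness length or stability) that this baseline requires.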
567 Economic Valuation of Forest Landscape Function Using a Conditional Logit Model
Authors: A. J. Julius, E. Imoagene, O. A. Ganiyu
Abstract:
The purpose of this study is to estimate the economic value of the services and functions rendered by the forest landscape using a conditional logit model. For this study, attributes and levels of the forest landscape were chosen; specifically, the attributes include topographical forest type, forest type, forest density, recreational factors (side trips, accessibility of valleys), and willingness to pay (WTP). Based on these factors, 48 choice sets in balanced and orthogonal form were constructed using the Statistical Analysis System (SAS) 9.1. The efficiency of the questionnaire was 6.02 (D-error: 0.1), and the choice sets and socio-economic variables were analyzed. To reduce the cognitive load on respondents, the 48 choice sets were divided into 4 types in the questionnaire, so that each respondent responded to 12 choice sets. The study population comprised citizens of seven metropolitan cities including Ibadan, Ilorin and Osogbo, and annual WTP per household was elicited using an interview questionnaire; a total of 267 copies were recovered. As a result, Osogbo had 0.45, and statistical similarities could not be found except for urban forests, forest density, recreational factors, and level of WTP. Average annual WTP per household for the forest landscape was 104,758 Naira (Nigerian currency); based on the outcome from this model, the total economic value of the services and functions enjoyed from the Nigerian forest landscape reaches approximately 1.6 trillion Naira.
Keywords: economic valuation, urban cities, services, forest landscape, logit model, Nigeria
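The conditional logit model named above assigns each alternative in a choice set a probability equal to the softmax of its deterministic utility. A minimal sketch, with purely illustrative attribute values and coefficients (not the study's estimates):

```python
import numpy as np

# Three hypothetical landscape alternatives in one choice set; columns
# are illustrative attributes (forest type, density, accessibility).
X = np.array([
    [1.0, 0.0, 2.0],
    [0.0, 1.0, 1.0],
    [1.0, 1.0, 0.0],
])
beta = np.array([0.8, 0.5, -0.3])   # placeholder taste coefficients

v = X @ beta                         # deterministic utilities
p = np.exp(v) / np.exp(v).sum()      # conditional logit choice probabilities
```

In estimation, β is chosen to maximise the likelihood of respondents' observed choices across all choice sets; WTP measures are then derived from ratios of the fitted coefficients.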
566 Modelling Biological Treatment of Dye Wastewater in SBR Systems Inoculated with Bacteria by Artificial Neural Network
Authors: Yasaman Sanayei, Alireza Bahiraie
Abstract:
This paper presents a systematic methodology based on the application of artificial neural networks for a sequencing batch reactor (SBR). The SBR is a fill-and-draw biological wastewater technology especially suited for nutrient removal. Treating reactive dye with Sphingomonas paucimobilis bacteria in a sequencing batch reactor is a novel approach to dye removal. The influent COD, MLVSS, and reaction time were selected as the process inputs, and the effluent COD and BOD as the process outputs. The best possible result for the discrete pole parameter was a = 0.44. In order to adjust the parameters of the ANN, the Levenberg-Marquardt (LM) algorithm was employed. The results predicted by the model were compared to the experimental data and showed a high correlation, with R² > 0.99 and a low mean absolute error (MAE). The results from this study reveal that the developed model is accurate and efficacious in predicting the COD and BOD parameters of dye-containing wastewater treated by SBR. The proposed modeling approach can be applied to other industrial wastewater treatment systems to predict effluent characteristics. Note that SBRs are normally operated with constant predefined stage durations, resulting in less efficient operation. Data obtained from the on-line electronic sensors installed in the SBR and from control quality laboratory analysis were used to develop the optimal architectures of two different ANNs. The results show that the developed models can be used as efficient and cost-effective predictive tools for the system analysed.
Keywords: artificial neural network, COD removal, SBR, Sphingomonas paucimobilis
565 Anthropomorphism in the Primate Mind-Reading Debate: A Critique of Sober's Justification Argument
Authors: Boyun Lee
Abstract:
This study discusses whether the anthropomorphism that some scientists tend to use in cross-species comparison can be justified epistemologically, especially in the primate mind-reading debate. Concretely, this study critically analyzes Elliott Sober's argument about the mind-reading hypothesis (MRH), an anthropomorphic hypothesis which states that nonhuman primates (e.g., chimpanzees) are mind-readers like humans. Although many scientists consider anthropomorphism an error, and regard choosing an anthropomorphic hypothesis like MRH without definite evidence as invalid, Sober argues that anthropomorphism is supported by cladistic parsimony, which suggests choosing the simplest hypothesis postulating the minimum number of evolutionary changes, and that it can be justified epistemologically in the mind-reading debate. However, his argument has several problems. First, Reichenbach's theorem, which Sober uses in the process of showing that MRH has a higher likelihood than its competing behavior-reading hypothesis (BRH), does not fit the context of inferring evolutionary relationships. Second, the phylogenetic tree Sober supports is only one of the possible scenarios of MRH, and even setting this problem aside, it is difficult to prove that the possibility that nonhuman primate species and humans share mind-reading ability is higher than the alternative, considering how evolution occurs. Consequently, it seems hard to justify the anthropomorphism of MRH under Sober's argument. Some scientists and philosophers say that anthropomorphism sometimes helps in observing interesting phenomena or making hypotheses in comparative biology. Nonetheless, we cannot conclude that it provides answers about why and how the interesting phenomena appear, or which of the hypotheses is better, at least in the mind-reading debate, under the current state of knowledge.
Keywords: anthropomorphism, cladistic parsimony, comparative biology, mind-reading debate
564 Optimizing and Evaluating Performance Quality Control of the Production Process of Disposable Essentials Using Approach Vague Goal Programming
Authors: Hadi Gholizadeh, Ali Tajdin
Abstract:
Effective production planning requires controlling the quality of processes. This paper aims at improving the performance of the disposable essentials process using statistical quality control and goal programming in a vague environment, which expresses uncertainty, since there is always measurement error in the real world. Therefore, in this study, the conditions are examined in a vague, distance-based environment. The disposable essentials process at Kach Company was studied. Statistical control tools were used to characterize the existing process for four factor responses: the average weights, heights, crater diameters, and volumes of the disposable glasses. Goal programming was then utilized to find the combination of optimal factor settings in a vague environment, which accounts for uncertainty in the initial information when some of the model parameters are vague; a fuzzy regression model is also used to predict the responses of the four described factors. Optimization results show that the process capability index values for the average weights, heights, crater diameters and volumes of the disposable glasses were improved. This increases product quality and reduces waste, which lowers the cost of the finished product and ultimately brings customer satisfaction, and this satisfaction will mean increased sales.
Keywords: goal programming, quality control, vague environment, disposable glasses' optimization, fuzzy regression
563 A Comparative Soft Computing Approach to Supplier Performance Prediction Using GEP and ANN Models: An Automotive Case Study
Authors: Seyed Esmail Seyedi Bariran, Khairul Salleh Mohamed Sahari
Abstract:
In multi-echelon supply chain networks, optimal supplier selection depends significantly on the accuracy of suppliers' performance prediction. Different methods of multi-criteria decision making, such as ANN, GA, fuzzy methods, AHP, etc., have previously been used to predict supplier performance, but the "black-box" characteristic of these methods remains a major concern to be resolved. Therefore, the primary objective of this paper is to implement an artificial intelligence-based gene expression programming (GEP) model and compare its prediction accuracy with that of an ANN. A full factorial design with a 95% confidence interval is initially applied to determine the appropriate set of criteria for supplier performance evaluation. A test-train approach is then utilized for the ANN and GEP separately. The training results are used to find the optimal network architecture, and the testing data determine the prediction accuracy of each method based on root mean square error (RMSE) and the correlation coefficient (R²). The results of a case study conducted at Supplying Automotive Parts Co. (SAPCO), with more than 100 local and foreign supply chain members, revealed that, in comparison with ANN, gene expression programming has a significant advantage in predicting supplier performance, as shown by the respective RMSE and R-squared values. Moreover, using GEP, a mathematical function was also derived, resolving the issue of the ANN's black-box structure in modeling the performance prediction.
Keywords: supplier performance prediction, ANN, GEP, automotive, SAPCO
562 Cracks Detection and Measurement Using VLP-16 LiDAR and Intel Depth Camera D435 in Real-Time
Authors: Xinwen Zhu, Xingguang Li, Sun Yi
Abstract:
Cracks are among the most common forms of damage in buildings, bridges, roads and similar structures, and may pose safety hazards. Cracks occur in structures of various materials, and traditional methods of manual detection and measurement, known to be subjective, time-consuming, and labor-intensive, are gradually becoming unable to meet the needs of modern development. In addition, crack detection and measurement must be safe given space limitations and hazards, so intelligent crack detection has become a necessary area of research. In this paper, an efficient method for crack detection and quantification using a 3D sensor, a LiDAR, and a depth camera is proposed. This method works even in a dark environment, which is usual in real-world applications. The LiDAR rapidly spins to scan the surrounding environment, discovering cracks with laser pulses thousands of times per second and providing a rich 3D point cloud in real time. The LiDAR provides quite accurate depth information: the distance of each point can be determined to within about ±3 cm, and not only is this good for obtaining precise distances, but top-range models also allow scanning at ranges of over 100 m. However, this accuracy is still too coarse for some high-precision structures and materials. To measure crack depth much more accurately, a depth camera is needed; the cracks are scanned by the depth camera at the same time. Finally, all data from the LiDAR and depth camera are analyzed, and the size of the cracks can be quantified successfully. The comparison shows that the minimum and mean absolute percentage errors between measured and calculated widths are about 2.22% and 6.27%, respectively. The experiments and results are presented in this paper.
Keywords: LiDAR, depth camera, real-time, detection and measurement
Procedia PDF Downloads 224
561 A Study of Cost and Revenue Earned from Tourist Walking Street Activities in Songkhla City Municipality, Thailand
Authors: Weerawan Marangkun
Abstract:
This study is a survey intended to investigate cost, revenue, and factors affecting changes in revenue, and to provide guidelines for improving the factors affecting changes in revenue from tourist walking street activities in Songkhla City Municipality. The instruments used in this study were structured interviews, with the sample size determined from Yamane's table (1973) at a sampling error of ±10%. The sample, consisting of 83 entrepreneurs, was drawn from a total population of 272 using the purposive sampling method. Data were collected during the 6-month period from December 2011 until May 2012. The findings indicate that the cost paid by an entrepreneur in connection with his/her services for tourists is mainly for travel, at about 290 Baht per day. Each entrepreneur earns about 3,850 Baht per day, or about 400,000 Baht per year. The combined total revenue from walking street tourist activities is about 108.8 million Baht per year. Such activities add economic value to tourist facilities through tourist expenditure and provide the entrepreneurs with considerable income. Factors affecting changes in revenue from tourist walking street activities are: the increase in the number of entrepreneurs; the holding of trade fairs, events, or interesting shows in the vicinity; and weather conditions (e.g., abundant rainfall, which can contribute to a decrease in the number of tourists). Suggested measures to improve the factors affecting changes in income are: the addition or creation of new activities; regulation of the operations of the stalls and parking area; and greater publicity through social networks. Keywords: cost, revenue, tourist, walking street
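The aggregate revenue figure is consistent with the per-entrepreneur figure reported in the abstract, as a quick arithmetic check shows:

```python
entrepreneurs = 272                # total population of walking street entrepreneurs
annual_per_entrepreneur = 400_000  # Baht per entrepreneur per year, as reported

total_revenue = entrepreneurs * annual_per_entrepreneur
print(total_revenue)  # 108800000 Baht, i.e. the 108.8 million Baht per year reported
```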
Procedia PDF Downloads 362
560 Modeling of Foundation-Soil Interaction Problem by Using Reduced Soil Shear Modulus
Authors: Yesim Tumsek, Erkan Celebi
Abstract:
In order to simulate the infinite soil medium in a soil-foundation interaction problem, the essential geotechnical parameter on which the foundation stiffness depends is the soil shear modulus. This parameter directly affects the site and structural response of the considered model under earthquake ground motions. The strain dependence of the shear modulus under cyclic loads makes it difficult to estimate an accurate value for the foundation stiffness in a successful dynamic soil-structure interaction analysis. The aim of this study is to discuss in detail how to use an appropriate value of the soil shear modulus in computational analyses and to evaluate the effect of the variation of shear modulus with strain on the impedance functions used in the sub-structure method for idealizing the soil-foundation interaction problem. Here, the impedance functions consist of springs and dashpots representing the frequency-dependent stiffness and damping characteristics at the soil-foundation interface. Earthquake-induced vibration energy is dissipated into the soil by both radiation and hysteretic damping. Therefore, flexible-base system damping, as well as the variability in shear strength, should be considered in the calculation of the impedance functions to achieve a more realistic dynamic soil-foundation interaction model. For these purposes, a MATLAB code has been written. The case-study example chosen for the analysis is a 4-story reinforced concrete building located in Istanbul, consisting of shear walls and moment-resisting frames with a total height of 12 m from the basement level. The foundation system consists of two different-sized strip footings on clayey soils of different plasticity (here, PI = 13 and 16). In the first stage of this study, the shear modulus reduction factor was not considered in the MATLAB algorithm.
The static stiffnesses, dynamic stiffness modifiers, and embedment correction factors of two rigid rectangular foundations, measuring 2 m wide by 17 m long below the moment frames and 7 m wide by 17 m long below the shear walls, are obtained for the translational and rocking vibration modes. Afterwards, their dynamic impedance functions are calculated for the reduced shear modulus through the developed MATLAB code. The embedment effect of the foundation is also considered in these analyses. The analysis results show that the strain induced in the soil depends on the extent of the earthquake demand. It is clearly observed that as the strain range increases, the dynamic stiffness of the foundation medium decreases dramatically. The overall response of the structure can be affected considerably by the degradation in soil stiffness, even for a moderate earthquake. Therefore, it is very important to arrive at the corrected dynamic shear modulus for earthquake analysis including soil-structure interaction. Keywords: clay soil, impedance functions, soil-foundation interaction, sub-structure approach, reduced shear modulus
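Because foundation stiffness scales with the soil shear modulus, applying a modulus reduction factor directly scales the computed impedance. The sketch below uses a Gazetas-style expression for the static horizontal stiffness of a rigid rectangular footing; the coefficients, small-strain modulus, and reduction factor are illustrative assumptions, not the values used in the authors' MATLAB code.

```python
def static_horizontal_stiffness(G, nu, B, L):
    """Static horizontal stiffness of a rigid rectangular surface footing
    (a Gazetas-style form; the numerical coefficients here are illustrative)."""
    return (2 * G * L / (2 - nu)) * (2 + 2.5 * (B / L) ** 0.85)

G_max = 50e6       # small-strain shear modulus of the clay, Pa (assumed)
reduction = 0.4    # G/G_max at the earthquake-induced strain level (assumed)

# Footing below the moment frames: 2 m wide by 17 m long, Poisson's ratio 0.3
K_small_strain = static_horizontal_stiffness(G_max, 0.3, 2.0, 17.0)
K_reduced = static_horizontal_stiffness(reduction * G_max, 0.3, 2.0, 17.0)

# Stiffness is linear in G, so the static term of the impedance
# drops by the same factor as the shear modulus
print(K_reduced / K_small_strain)
```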
Procedia PDF Downloads 269
559 Software Transactional Memory in a Dynamic Programming Language at Virtual Machine Level
Authors: Szu-Kai Hsu, Po-Ching Lin
Abstract:
As more and more multi-core processors emerge, the traditional sequential programming paradigm no longer suffices, yet only a few modern dynamic programming languages can leverage this advantage. Ruby, for example, despite its wide adoption, includes only threads as a simple parallel primitive, and the global virtual machine lock of the official Ruby runtime makes it impossible to exploit full parallelism. Though various alternative Ruby implementations do eliminate the global virtual machine lock, they only provide developers with dated locking mechanisms for data synchronization. However, traditional locking mechanisms are error-prone by nature. Software transactional memory (STM) is one of the promising alternatives. This paper introduces a new virtual machine, GobiesVM, to provide a native software-transactional-memory-based solution for dynamic programming languages to exploit parallelism. We also propose a simplified variation of the Transactional Locking II (TL2) algorithm. The empirical results of our experiments show that STM support at the virtual machine level enables developers to write straightforward code without compromising parallelism or sacrificing thread safety. Existing source code requires minimal or even no modification, which allows developers to easily switch their legacy codebase to a parallel environment. The performance evaluations of GobiesVM also indicate that the difference between sequential and parallel execution is significant. Keywords: global interpreter lock, Ruby, software transactional memory, virtual machine
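A minimal flavour of the TL2-style design (a global version clock, per-variable versioned locks, and read-set validation at commit) can be sketched in Python; this is a simplified illustration, not GobiesVM's implementation, and it omits details such as lock-bit checks during reads and bounded retry.

```python
import threading

class RetryError(Exception):
    """Raised when a transaction observes an inconsistent snapshot."""

_clock = 0
_clock_lock = threading.Lock()

class TVar:
    """A transactional variable with a version stamp and a write lock."""
    def __init__(self, value):
        self.value, self.version = value, 0
        self.lock = threading.Lock()

class Transaction:
    def __init__(self):
        self.read_version = _clock              # snapshot the global clock
        self.read_set, self.write_set = {}, {}

    def read(self, tvar):
        if tvar in self.write_set:              # read-your-own-writes
            return self.write_set[tvar]
        if tvar.version > self.read_version:
            raise RetryError()                  # written after our snapshot
        self.read_set[tvar] = tvar.version
        return tvar.value

    def write(self, tvar, value):
        self.write_set[tvar] = value

    def commit(self):
        global _clock
        locked = sorted(self.write_set, key=id)  # fixed order avoids deadlock
        for tv in locked:
            tv.lock.acquire()
        try:
            for tv, seen in self.read_set.items():
                if tv.version != seen:           # someone wrote behind our back
                    raise RetryError()
            with _clock_lock:
                _clock += 1
                write_version = _clock
            for tv, val in self.write_set.items():
                tv.value, tv.version = val, write_version
        finally:
            for tv in locked:
                tv.lock.release()

def atomically(body):
    """Run body(tx) until it commits without conflicts."""
    while True:
        tx = Transaction()
        try:
            result = body(tx)
            tx.commit()
            return result
        except RetryError:
            continue

counter = TVar(0)
atomically(lambda tx: tx.write(counter, tx.read(counter) + 1))
print(counter.value)  # 1
```

The retry loop in `atomically` is what lets existing code run unchanged: a conflicting transaction is simply aborted and re-executed rather than corrupting shared state.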
Procedia PDF Downloads 285
558 Medication Errors in a Juvenile Justice Youth Development Center
Authors: Tanja Salary
Abstract:
This paper discusses a study conducted in a juvenile justice facility regarding medication errors. It introduces data about medication errors collected in a juvenile justice facility from 2011 to 2019 and explores contributing factors related to those errors. The data were obtained from electronic incident records of medication errors documented from 2011 through 2019. In addition, the paper reviews both current and historical empirical research on patient safety standards and quality of care, comparing traditional health care facilities to juvenile justice residential facilities, and identifies a gap in the research. The theoretical/conceptual framework for the study was Bandura and Adams’s self-efficacy theory of behavioral change and Mark Friedman’s results-based accountability theory. Despite the lack of previous studies addressing medication errors in juvenile justice facilities, this study adds to the body of knowledge, including the potential relationship between medication errors and the contributing factors of race and age. Implications for future research include the effect that education and training will have on communication among juvenile justice staff, including nurses who administer medications to juveniles, to ensure adherence to patient safety standards. There are several opportunities for future research concerning other factors that may affect medication administration errors within residential juvenile justice facilities. Keywords: juvenile justice, medication errors, juveniles, error reduction strategies
Procedia PDF Downloads 66
557 Merits and Demerits of Participation of Fellow Examinee as Subjects in Observed Structured Practical Examination in Physiology
Authors: Mohammad U. A. Khan, Md. D. Hossain
Abstract:
Background: Departments of Physiology find it difficult to manage ‘subjects’ for practical procedures. To avoid this difficulty, fellow examinees from another group may be used as subjects. Objective: To find out the merits and demerits of using fellow examinees as subjects in practical procedures. Method: This cross-sectional descriptive study was conducted in the Department of Physiology, Noakhali Medical College, Bangladesh, during May-June 2014. Forty-two first-year undergraduate medical students from a selected public medical college of Bangladesh were enrolled purposively. Consent of the students and the authority was taken. Eighteen of them were selected as subjects and designated as subject-examinees. The other fellow examinees (non-subjects) examined their blood pressure and pulse as part of an ‘observed structured practical examination’ (OSPE). The opinion of all examinees regarding the merits and demerits of using fellow examinees as subjects was recorded. Result: The examinees stated that they could perform the practical procedure without nervousness (24/42, 57.14%), accurately and comfortably (14/42, 33.33%), and that subjects were available without wasting time (2/42, 4.76%). Nineteen students (45.24%) found no disadvantage, and 2 (4.76%) felt embarrassed when the subject was of the opposite sex. The subject-examinees stated that they could learn from the errors made by their fellow examinees (11/18, 61.1%), and 75% of non-subject examinees expressed willingness to be subjects so that they could learn from their fellows’ errors. Conclusion: Using fellow examinees as subjects is beneficial for both non-subject and subject examinees. Funding sources: Navana, Beximco, Unihealth, Square & Acme Pharma, Bangladesh Ltd. Keywords: physiology, teaching, practical, OSPE
Procedia PDF Downloads 151
556 Coordinated Interference Canceling Algorithm for Uplink Massive Multiple Input Multiple Output Systems
Authors: Messaoud Eljamai, Sami Hidouri
Abstract:
Massive multiple-input multiple-output (MIMO) is an emerging technology for new cellular networks such as 5G systems. Its principle is to use many antennas per cell in order to maximize the network's spectral efficiency. Inter-cell interference remains a fundamental problem, and massive MIMO is no exception: it improves performance only when the number of antennas is significantly greater than the number of users, which considerably limits the network's spectral efficiency. In this paper, a coordinated detector for an uplink massive MIMO system is proposed in order to mitigate inter-cell interference. The proposed scheme combines the coordinated multipoint (CoMP) technique with an interference-cancelling algorithm. It requires each serving cell to send its received symbols, after processing, decision, and error detection, to the interfered cells via a backhaul link. Each interfered cell can then eliminate inter-cell interference by generating the interfering users’ contribution and subtracting it from the received signal. The resulting signal is more reliable than the original received signal, which allows the uplink massive MIMO system to improve its performance dramatically. Simulation results show that the proposed detector improves system spectral efficiency compared to classical linear detectors. Keywords: massive MIMO, CoMP, interference cancelling algorithm, spectral efficiency
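The core cancellation step — reconstructing the interfering users' contribution from symbols shared over the backhaul and subtracting it from the received signal — can be illustrated with a noise-free toy model. The dimensions, random channels, and zero-forcing detector below are illustrative assumptions, not the paper's simulation setup.

```python
import numpy as np

rng = np.random.default_rng(0)

n_rx, n_users = 8, 2
H_serv = rng.normal(size=(n_rx, n_users))       # serving-cell channel (assumed known)
H_intf = rng.normal(size=(n_rx, n_users))       # interfering users' channel (assumed known)
s_serv = rng.choice([-1.0, 1.0], size=n_users)  # served users' BPSK symbols
s_intf = rng.choice([-1.0, 1.0], size=n_users)  # symbols decided by the neighbouring cell

y = H_serv @ s_serv + H_intf @ s_intf           # received uplink signal (noise omitted)

# Subtract the reconstructed interference using symbols shared over the backhaul
y_clean = y - H_intf @ s_intf

# Zero-forcing detection on the cleaned signal recovers the served symbols
s_hat = np.sign(np.linalg.pinv(H_serv) @ y_clean)
print(np.array_equal(s_hat, s_serv))  # True in this noise-free sketch
```

With noise and imperfect channel knowledge the subtraction is only approximate, which is why the paper forwards symbols after decision and error detection.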
Procedia PDF Downloads 147
555 Evaluation of Vehicle Classification Categories: Florida Case Study
Authors: Ren Moses, Jaqueline Masaki
Abstract:
This paper addresses the need for an accurate and updated vehicle classification system through a thorough evaluation of vehicle class categories, identifying errors arising from the existing system and proposing modifications. Data collected from two permanent traffic monitoring sites in Florida were used to evaluate the performance of the existing vehicle classification table. The vehicle data were collected and classified by an automatic vehicle classifier (AVC), and a video camera was used to obtain ground truth data. The Federal Highway Administration (FHWA) vehicle classification definitions were used to define vehicle classes from the video and compare them with the data generated by the AVC in order to identify the sources of misclassification. Six types of errors were identified, and modifications were made to the classification table to improve classification accuracy. The results of this study include an updated vehicle classification table with a reduction in total error of 5.1%, a step-by-step procedure for evaluating vehicle classification studies, and recommendations for improving the FHWA 13-category rule set. The recommendations indicate that the vehicle classification definitions in this scheme need to be updated to reflect the distribution of current traffic. The presented results will be of interest to state transportation departments and to consultants, researchers, engineers, designers, and planners who require accurate vehicle classification information for the planning, design, and maintenance of transportation infrastructure. Keywords: vehicle classification, traffic monitoring, pavement design, highway traffic
Procedia PDF Downloads 180
554 Local Differential Privacy-Based Data-Sharing Scheme for Smart Utilities
Authors: Veniamin Boiarkin, Bruno Bogaz Zarpelão, Muttukrishnan Rajarajan
Abstract:
The manufacturing sector is a vital component of most economies, which makes it the target of a large number of cyberattacks, and disruption of operations may lead to significant economic consequences. Adversaries aim to disrupt the production processes of manufacturing companies, gain financial advantages, and steal intellectual property by getting unauthorised access to sensitive data. Access to sensitive data helps organisations enhance their production and management processes. However, the majority of existing data-sharing mechanisms are either susceptible to cyberattacks or heavy in terms of computational overhead. In this paper, a privacy-preserving data-sharing scheme for smart utilities is proposed. First, a customer privacy adjustment mechanism is proposed to ensure that end-users have control over their privacy, as required by the latest government regulations, such as the General Data Protection Regulation. Secondly, a local differential privacy-based mechanism is proposed to ensure the privacy of end-users by hiding real data according to the end-user's preferences. The proposed scheme may be applied to different industrial control systems; in this study, it is validated for energy utility use cases consisting of smart, intelligent devices. The results show that the proposed scheme can guarantee the required level of privacy with an expected relative error in utility. Keywords: data-sharing, local differential privacy, manufacturing, privacy-preserving mechanism, smart utility
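One standard way to hide real readings behind user-chosen noise is the Laplace mechanism; the sketch below is a generic illustration of the privacy-utility trade-off, not the authors' exact scheme, and the smart-meter readings are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

def perturb(readings, epsilon, sensitivity=1.0):
    """Add Laplace noise with scale sensitivity/epsilon to each reading.
    Smaller epsilon means stronger privacy and a larger expected error."""
    scale = sensitivity / epsilon
    return readings + rng.laplace(0.0, scale, size=len(readings))

true = rng.uniform(0.0, 2.0, size=1000)  # hypothetical smart-meter readings (kWh)
strict = perturb(true, epsilon=0.1)      # strong privacy, heavy noise
loose = perturb(true, epsilon=10.0)      # weak privacy, light noise

# The end-user's epsilon choice directly controls the utility error
print(np.mean(np.abs(strict - true)) > np.mean(np.abs(loose - true)))  # True
```

Letting each customer pick their own epsilon is one simple way to realise the "privacy adjustment" idea: the utility sees noisier data from customers who demand stronger privacy.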
Procedia PDF Downloads 76
553 Integration of Virtual Learning of Induction Machines for Undergraduates
Authors: Rajesh Kumar, Puneet Aggarwal
Abstract:
In the context of understanding the problems faced by undergraduate students while carrying out laboratory experiments involving high voltages, it was found that most students are hesitant to work directly on the machine, since an error in the circuitry might lead to deterioration of the machine and laboratory instruments. It has therefore become necessary to include modern pedagogic techniques for undergraduate students, which help them first carry out the experiment in a virtual system and then work on the live circuit. A further advantage is that students can try out their intuitive ideas in the virtual environment, potentially leading to new research and innovation. In this paper, the virtual environment used is MATLAB/Simulink for three-phase induction machines. The performance analysis of the three-phase induction machine is carried out in the virtual environment and includes the Direct Current (DC) test, the no-load test, and the blocked-rotor test, along with speed-torque characteristics for different rotor resistances and input voltages. Furthermore, this paper carries out computer-aided teaching of a basic Voltage Source Inverter (VSI) drive circuit. This gives undergraduates a clearer view of the experiments performed on the virtual machine (the no-load, blocked-rotor, and DC tests). After successful implementation of the basic tests, the VSI circuit is implemented, and the total harmonic distortion (THD) and Fast Fourier Transform (FFT) of the current and voltage waveforms are studied. Keywords: blocked-rotor test, DC test, no-load test, virtual environment, voltage source inverter
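The DC and blocked-rotor tests mentioned above yield the equivalent-circuit parameters via standard per-phase formulas; the test readings below are hypothetical, and a wye-connected stator is assumed for the DC test.

```python
import math

def dc_test(V_dc, I_dc):
    """Stator resistance per phase from the DC test (wye connection assumed:
    the DC voltage is applied across two phase windings in series)."""
    return V_dc / (2 * I_dc)

def blocked_rotor_test(V_ph, I_ph, P_ph):
    """Per-phase equivalent resistance and leakage reactance from
    blocked-rotor readings (voltage, current, and power per phase)."""
    Z = V_ph / I_ph
    R = P_ph / I_ph ** 2
    X = math.sqrt(Z ** 2 - R ** 2)
    return R, X

# Hypothetical per-phase test readings
R1 = dc_test(12.0, 10.0)                          # stator resistance: 0.6 ohm
R_eq, X_eq = blocked_rotor_test(40.0, 10.0, 300.0)
R2 = R_eq - R1                                    # rotor resistance referred to the stator
print(R1, R_eq, round(X_eq, 3), R2)
```

The no-load test is handled analogously, giving the magnetising branch from the no-load voltage, current, and power.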
Procedia PDF Downloads 354
552 Anomaly Detection in Financial Markets Using Tucker Decomposition
Authors: Salma Krafessi
Abstract:
Financial markets are a multifaceted, intricate environment producing enormous volumes of data every day. Accurate anomaly identification in these data is essential for finding investment opportunities, possible fraudulent activity, and market oddities. Conventional methods for detecting anomalies frequently fail to capture the complex organization of financial data. In order to improve the identification of abnormalities in financial time series data, this study presents Tucker decomposition as a reliable multi-way analysis approach. We start by gathering closing prices for the S&P 500 index across a number of decades. The information is converted into a three-dimensional tensor containing internal characteristics and temporal sequences in a sliding-window structure. The tensor is then decomposed via Tucker decomposition into a core tensor and matching factor matrices, allowing latent patterns and relationships in the data to be captured. The reconstruction error from the Tucker decomposition is a possible sign of abnormality: by setting a statistical threshold, we can identify large deviations that indicate unusual behavior. A thorough examination contrasting the Tucker-based method with traditional anomaly detection approaches validates our methodology. The outcomes demonstrate the superiority of Tucker decomposition in identifying intricate and subtle abnormalities that are otherwise missed. This work opens the door for more research into multi-way data analysis approaches across a range of disciplines and emphasizes the value of tensor-based methods in financial analysis. Keywords: Tucker decomposition, financial markets, financial engineering, artificial intelligence, decomposition models
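The decomposition-and-reconstruction-error idea can be sketched with a truncated higher-order SVD (one standard way to compute a Tucker decomposition) using only NumPy; the tensor below is synthetic, not S&P 500 data.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: move the chosen axis first and flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_mult(T, M, mode):
    """Multiply tensor T by matrix M along the given mode."""
    Tm = np.moveaxis(T, mode, 0)
    out = (M @ Tm.reshape(Tm.shape[0], -1)).reshape((M.shape[0],) + Tm.shape[1:])
    return np.moveaxis(out, 0, mode)

def tucker_hosvd(T, ranks):
    """Truncated HOSVD: factor matrices from mode-wise SVDs, then the core."""
    factors = [np.linalg.svd(unfold(T, m), full_matrices=False)[0][:, :r]
               for m, r in enumerate(ranks)]
    core = T
    for m, U in enumerate(factors):
        core = mode_mult(core, U.T, m)
    return core, factors

def reconstruct(core, factors):
    T_hat = core
    for m, U in enumerate(factors):
        T_hat = mode_mult(T_hat, U, m)
    return T_hat

rng = np.random.default_rng(0)
# Synthetic, exactly rank-(2, 2, 2) tensor: e.g. windows x features x lags
G = rng.normal(size=(2, 2, 2))
A, B, C = (rng.normal(size=(n, 2)) for n in (30, 5, 4))
T = reconstruct(G, [A, B, C])

core, factors = tucker_hosvd(T, ranks=(2, 2, 2))
err = np.linalg.norm(T - reconstruct(core, factors)) / np.linalg.norm(T)
print(err < 1e-10)  # a low-rank tensor reconstructs almost exactly; an anomalous
                    # window would instead show a large reconstruction error
```

Flagging works by computing this relative error per sliding window and thresholding it: windows whose structure the low-rank model cannot explain stand out.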
Procedia PDF Downloads 69
551 Computational Modeling of Heat Transfer from a Horizontal Array Cylinders for Low Reynolds Numbers
Authors: Ovais U. Khan, G. M. Arshed, S. A. Raza, H. Ali
Abstract:
A numerical model based on the computational fluid dynamics (CFD) approach is developed to investigate heat transfer across a longitudinal row of six circular cylinders. The momentum and energy equations are solved using the finite volume discretization technique. The convective terms are discretized with a second-order upwind scheme, whereas the diffusion terms are discretized with a central differencing scheme; a second-order implicit technique is used for time integration. Numerical simulations have been carried out for three values of the free-stream Reynolds number (ReD = 100, 200, 300) and two values of the dimensionless longitudinal pitch ratio (SL/D = 1.5, 2.5) to demonstrate the fluid flow and heat transfer behavior. The numerical results are validated against analytical findings reported in the literature and are found to be in good agreement: the maximum percentage error between the average Nusselt numbers obtained from the numerical and analytical solutions is within about 10% for free-stream Reynolds numbers up to 300. It is demonstrated that the average Nusselt number for the array of cylinders increases with increasing free-stream Reynolds number and dimensionless longitudinal pitch ratio. The information generated would be useful in the design of more efficient heat exchangers or other fluid systems involving arrays of cylinders. Keywords: computational fluid dynamics, array of cylinders, longitudinal pitch ratio, finite volume method, incompressible Navier-Stokes equations
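As an analytical baseline of the kind used in such validations, a standard single-cylinder correlation such as Churchill-Bernstein gives average Nusselt numbers at the simulated Reynolds numbers. This is a first-cylinder approximation only — downstream cylinders in the array sit in upstream wakes — and Pr = 0.71 for air is assumed; the abstract does not state which analytical solution the authors compared against.

```python
def churchill_bernstein(Re, Pr):
    """Average Nusselt number for cross-flow over a single circular cylinder
    (Churchill-Bernstein correlation)."""
    term = 0.62 * Re ** 0.5 * Pr ** (1 / 3) / (1 + (0.4 / Pr) ** (2 / 3)) ** 0.25
    return 0.3 + term * (1 + (Re / 282000.0) ** (5 / 8)) ** 0.8

# Baseline values at the three simulated free-stream Reynolds numbers (air)
for Re in (100, 200, 300):
    print(Re, round(churchill_bernstein(Re, 0.71), 2))
```

A percentage error against such a baseline, computed per Reynolds number, is how the "within about 10%" validation figure would be obtained.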
Procedia PDF Downloads 85
550 Utilizing Computational Fluid Dynamics in the Analysis of Natural Ventilation in Buildings
Authors: A. W. J. Wong, I. H. Ibrahim
Abstract:
Increasing urbanisation has driven building designers to incorporate natural ventilation into the designs of sustainable buildings. This project utilises computational fluid dynamics (CFD) to investigate the natural ventilation of an academic building, SIT@SP, using an assessment criterion based on daily mean temperature and mean velocity. The areas of interest are the pedestrian areas on the first and fourth levels of the building. A reference case recommended by the Architectural Institute of Japan was used to validate the simulation model. The validated model was then used for coupled simulations of SIT@SP and its neighbouring geometries under two wind speeds. Both steady and transient simulations were used to identify differences in the results; the two agree well, with the transient simulation additionally identifying peak velocities during flow development. Under the lower wind speed, the first level was sufficiently ventilated while the fourth level was not. Under the higher wind speed, the first level experienced excessive wind velocities while the fourth level was adequately ventilated. The flow velocity at the fourth level was consistently lower than at the first level, which is attributed to either simulation model error or poor building design. SIT@SP is concluded to have a sufficiently ventilated first level and an insufficiently ventilated fourth level. Future work for this project includes modifying the urban geometry, improving the simulation model, evaluating other assessment metrics, and extending the area of interest to the entire building. Keywords: buildings, CFD simulations, natural ventilation, urban airflow
Procedia PDF Downloads 221