Search results for: accuracy and consistency
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4123

3223 Text Localization in Fixed-Layout Documents Using Convolutional Networks in a Coarse-to-Fine Manner

Authors: Beier Zhu, Rui Zhang, Qi Song

Abstract:

Text contained in fixed-layout documents such as ID cards, invoices, cheques, and passports can be of great semantic value and therefore requires high localization accuracy. Recently, algorithms based on deep convolutional networks have achieved high performance on text detection tasks. However, for text localization in fixed-layout documents, such algorithms detect word bounding boxes individually, which ignores the layout information. This paper presents a novel architecture built on convolutional neural networks (CNNs). A global text localization network and a regional bounding-box regression network are introduced to tackle the problem in a coarse-to-fine manner. The text localization network locates all word bounding points simultaneously, which takes the layout information into account. The bounding-box regression network takes as input the features pooled from arbitrarily sized RoIs and refines the localizations. These two networks share their convolutional features and are trained jointly. A typical type of fixed-layout document, the ID card, is selected to evaluate the effectiveness of the proposed system. The networks are trained on data cropped from natural scene images and on synthetic data produced by a synthetic text generation engine. Experiments show that our approach locates word bounding boxes with high accuracy and achieves state-of-the-art performance.
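
The coarse-to-fine idea can be sketched compactly. The snippet below is a minimal illustration under assumptions, not the authors' implementation: a shared convolutional backbone feeds a global head that regresses coarse bounding points for every word slot of the fixed layout at once, and a regional head that refines each box from RoI-pooled features; layer sizes and the number of word slots are illustrative.

```python
# Minimal sketch (not the paper's network) of a two-stage, coarse-to-fine localizer.
import torch
import torch.nn as nn
from torchvision.ops import roi_align

class CoarseToFineLocalizer(nn.Module):
    def __init__(self, num_words=8):
        super().__init__()
        # shared convolutional features used by both stages
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # coarse stage: one (x1, y1, x2, y2) per expected word, predicted jointly
        # so the fixed layout is learned as a whole rather than box by box
        self.global_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, num_words * 4)
        )
        # fine stage: refinement offsets from features pooled over each coarse RoI
        self.refine_head = nn.Linear(64 * 7 * 7, 4)
        self.num_words = num_words

    def forward(self, images):
        feats = self.backbone(images)                    # (B, 64, H/4, W/4)
        coarse = self.global_head(feats).view(-1, 4)     # coarse boxes in image coordinates
        batch_idx = torch.arange(images.size(0), dtype=feats.dtype,
                                 device=feats.device).repeat_interleave(self.num_words)
        rois = torch.cat([batch_idx.unsqueeze(1), coarse / 4.0], dim=1)  # feature-map scale
        pooled = roi_align(feats, rois, output_size=(7, 7))
        deltas = self.refine_head(pooled.flatten(1))     # per-box refinement offsets
        return coarse, coarse + deltas

model = CoarseToFineLocalizer()
coarse, refined = model(torch.rand(2, 3, 256, 256))      # two dummy ID-card crops
print(coarse.shape, refined.shape)                       # both (16, 4)
```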

Keywords: bounding box regression, convolutional networks, fixed-layout documents, text localization

Procedia PDF Downloads 185
3222 The Damage Assessment of Industrial Buildings Located on Clayey Soils Using in-Situ Tests

Authors: Ismail Akkaya, Mucip Tapan, Ali Ozvan

Abstract:

Some industrially prefabricated buildings located on clayey soils were damaged due to soil conditions. These damages generally result from differential settlement, differences in soil plasticity, and the groundwater level. The aim of this study is to determine the source of these building damages by conducting in-situ tests. Therefore, the pressuremeter test, a borehole loading test conducted to determine the properties of soils beneath foundations, and the Standard Penetration Test (SPT) were performed. The results of these two field tests were then used to accurately determine the consistency and firmness of the soils. The Pressuremeter Deformation Modulus (EM) and Net Limiting Pressure (PL) of the soils were calculated from the pressuremeter tests. These values were then compared with the SPT (N30) and SPT (N60) results. An empirical equation was developed to obtain EM and PL values of such soils from SPT test results. These values were then used to calculate the soil bearing capacity as well as the soil settlement. Finally, the relationship between the foundation settlement and the damage to these buildings was examined. It was found that the calculated settlement values were almost the same as the measured settlement values.

Keywords: damaged building, pressuremeter, standard penetration test, low and high plasticity clay

Procedia PDF Downloads 309
3221 Recurrent Patterns of Netspeak among Selected Nigerians on WhatsApp Platform: A Quest for Standardisation

Authors: Lily Chimuanya, Esther Ajiboye, Emmanuel Uba

Abstract:

One of the consequences of online communication is the birth of new orthography genres characterised by novel conventions of abbreviation and acronyms, usually referred to as Netspeak. Netspeak, also known as internet slang, is a style of writing mainly used in online communication to limit the length of text characters and to save time. The aim of this study is to evaluate how second language users of English have internalised this new convention of writing, to identify the recurrent patterns of Netspeak, and to assess the consistency of the use of the identified patterns in relation to their meanings. The study is corpus-based, and data drawn from WhatsApp chat pages of selected groups of Nigerian English speakers show a large number of inconsistencies in the patterns of Netspeak and their meanings. The study argues that rather than emphasise the negative impact of Netspeak on the communicative competence of second language users, studies should focus on suggesting models as yardsticks for standardising the usage of Netspeak and indeed all other emerging language conventions resulting from online communication. This stance stems from the inevitable global language transformation that is imminent with the coming of age of information technology.

Keywords: abbreviation, acronyms, Netspeak, online communication, standardisation

Procedia PDF Downloads 378
3220 The Comparison of the Reliability Margin Measure for the Different Concepts in the Slope Analysis

Authors: Filip Dodigovic, Kreso Ivandic, Damir Stuhec, S. Strelec

Abstract:

A general analysis of the differences between the former and the new design concepts in geotechnical engineering is carried out. The application of the new regulations creates a need for genuine adaptation of the computational principles of limit states, i.e. for a uniform way of analysing engineering tasks. In general, it is not possible to unambiguously match the limit state verification procedure with that used in construction engineering. The reasons are the impossibility of achieving full consistency in the common probabilistic basis of the analyses, and the fundamental effect of material properties on the values of actions and the influence of actions on resistances. Consequently, it is not possible to apply separate factorisation with partial coefficients, as in construction engineering. Design-procedure problems for slope stability analysis are examined in detail in light of the use of limit states in relation to the concept of allowable stresses. The safety margins in the slope stability analysis are quantified for both approaches. When analysing the stability of a slope by strict application of the forms adopted in the new regulations for significant external temporary and/or seismic actions, the equivalent margin of safety is increased. The consequence is the emergence of more conservative solutions.

Keywords: allowable pressure, Eurocode 7, limit states, slope stability

Procedia PDF Downloads 327
3219 Artificial Intelligence in Melanoma Prognosis: A Narrative Review

Authors: Shohreh Ghasemi

Abstract:

Introduction: Melanoma is a complex disease with various clinical and histopathological features that impact prognosis and treatment decisions. Traditional methods of melanoma prognosis involve manual examination and interpretation of clinical and histopathological data by dermatologists and pathologists. However, the subjective nature of these assessments can lead to inter-observer variability and suboptimal prognostic accuracy. AI, with its ability to analyze vast amounts of data and identify patterns, has emerged as a promising tool for improving melanoma prognosis. Methods: A comprehensive literature search was conducted to identify studies that employed AI techniques for melanoma prognosis. The search included databases such as PubMed and Google Scholar, using keywords such as "artificial intelligence," "melanoma," and "prognosis." Studies published between 2010 and 2022 were considered. The selected articles were critically reviewed, and relevant information was extracted. Results: The review identified various AI methodologies utilized in melanoma prognosis, including machine learning algorithms, deep learning techniques, and computer vision. These techniques have been applied to diverse data sources, such as clinical images, dermoscopy images, histopathological slides, and genetic data. Studies have demonstrated the potential of AI in accurately predicting melanoma prognosis, including survival outcomes, recurrence risk, and response to therapy. AI-based prognostic models have shown comparable or even superior performance compared to traditional methods.

Keywords: artificial intelligence, melanoma, accuracy, prognosis prediction, image analysis, personalized medicine

Procedia PDF Downloads 68
3218 Decomposition of Funds Transfer Pricing Components in Islamic Bank: The Exposure Effect of Shariah Non-Compliant Event Rectification Process

Authors: Azrul Azlan Iskandar Mirza

Abstract:

The purpose of Funds Transfer Pricing (FTP) in an Islamic bank is to promote prudent liquidity risk-taking behaviour by business units. The acquirer of stable deposits is rewarded, whilst a business unit that generates long-term assets is charged for the added liquidity funding risk. In the end, FTP promotes risk-adjusted pricing by incorporating a profit rate risk and a liquidity risk component in product pricing. However, in the event of a Shariah non-compliant event (SNCE), the FTP components will be examined in the rectification plan, especially when Islamic banks need to purify the non-compliant income. The findings show that the determination between actual and provisioned cost differs among the decisions of Shariah committees in Islamic banks. This paper reviews each of the FTP components to ensure that the classification of actual and provisioned costs reflects the decision on the rectification process for an SNCE. This will benefit future decisions and their consistency across Islamic banks.

Keywords: fund transfer pricing, Islamic banking, Islamic finance, shariah non-compliant event

Procedia PDF Downloads 184
3217 A Sensor Placement Methodology for Chemical Plants

Authors: Omid Ataei Nia, Karim Salahshoor

Abstract:

In this paper, a new precise and reliable sensor network methodology is introduced for unit processes and operations using the Constriction Coefficient Particle Swarm Optimization (CPSO) method. CPSO is introduced as a new search engine for optimal sensor network design purposes. Furthermore, a Square Root Unscented Kalman Filter (SRUKF) algorithm is employed as a new data reconciliation technique to enhance the stability and accuracy of the filter. The proposed design procedure incorporates precision, cost, observability and reliability, together with importance-of-variables (IVs) as a novel measure, in the Instrumentation Criteria (IC). To the best of our knowledge, no comprehensive approach has yet been proposed in the literature to take into account the importance of variables in the sensor network design procedure. In this paper, a specific weight is assigned to each sensor measuring a process variable in the sensor network to indicate the importance of that variable over the others, so as to cater to the ultimate sensor network application requirements. A set of distinct scenarios has been conducted to evaluate the performance of the proposed methodology in a simulated Continuous Stirred Tank Reactor (CSTR) as a highly nonlinear process plant benchmark. The obtained results reveal the efficacy of the proposed method, leading to significant improvement in accuracy with respect to alternative sensor network design approaches and, as a novel achievement, securing the definite allocation of sensors to the most important process variables in sensor network design.
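
As an illustration of the search engine alone, the sketch below runs constriction-coefficient PSO on a stand-in placement cost; the toy objective and variable weights are assumptions for demonstration, not the paper's instrumentation criteria, which combine precision, cost, observability, reliability and the importance-of-variable measure.

```python
# Minimal CPSO sketch: the constriction coefficient chi keeps the swarm stable.
import numpy as np

def cpso(objective, dim, n_particles=30, iters=200, phi1=2.05, phi2=2.05):
    phi = phi1 + phi2                                          # phi > 4 is required
    chi = 2.0 / abs(2.0 - phi - np.sqrt(phi**2 - 4.0 * phi))   # ~0.7298
    x = np.random.rand(n_particles, dim)                       # candidate placements in [0, 1]
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.array([objective(p) for p in x])
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = np.random.rand(*x.shape), np.random.rand(*x.shape)
        v = chi * (v + phi1 * r1 * (pbest - x) + phi2 * r2 * (gbest - x))
        x = np.clip(x + v, 0.0, 1.0)
        vals = np.array([objective(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

# Illustrative weighted objective: an entry > 0.5 means "place a sensor there";
# leaving an important variable (large weight) unmeasured is penalized more.
weights = np.array([5.0, 3.0, 1.0, 1.0, 0.5])   # assumed importance of five variables
cost = lambda s: (weights * (s < 0.5)).sum() + 0.2 * (s > 0.5).sum()
best, best_cost = cpso(cost, dim=5)
print(best > 0.5, best_cost)
```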

Keywords: constriction coefficient PSO, importance of variable, MRMSE, reliability, sensor network design, square root unscented Kalman filter

Procedia PDF Downloads 152
3216 The Legal and Regulatory Gaps of Blockchain-Enabled Energy Prosumerism

Authors: Karisma Karisma, Pardis Moslemzadeh Tehrani

Abstract:

This study aims to conduct a high-level strategic dialogue on the lack of consensus, consistency, and legal certainty regarding blockchain-based energy prosumerism so that appropriate institutional and governance structures can be put in place to address the inadequacies and gaps in the legal and regulatory framework. The drive to achieve national and global decarbonization targets is a driving force behind climate goals and policies under the Paris Agreement. In recent years, efforts to ‘demonopolize’ and ‘decentralize’ energy generation and distribution have driven the energy transition toward decentralized systems, invoking concepts such as ownership, sovereignty, and autonomy of RE sources. The emergence of individual and collective forms of prosumerism and the rapid diffusion of blockchain is expected to play a critical role in the decarbonization and democratization of energy systems. However, there is a ‘regulatory void’ relating to individual and collective forms of prosumerism that could prevent the rapid deployment of blockchain systems and potentially stagnate the operationalization of blockchain-enabled energy sharing and trading activities. The application of broad and facile regulatory fixes may be insufficient to address the major regulatory gaps. First, to the authors’ best knowledge, the concepts and elements circumjacent to individual and collective forms of prosumerism have not been adequately described in the legal frameworks of many countries. Second, there is a lack of legal certainty regarding the creation and adaptation of business models in a highly regulated and centralized energy system, which inhibits the emergence of prosumer-driven niche markets. There are also current and prospective challenges relating to the legal status of blockchain-based platforms for facilitating energy transactions, anticipated with the diffusion of blockchain technology. With the rise of prosumerism in the energy sector, the areas of (a) network charges, (b) energy market access, (c) incentive schemes, (d) taxes and levies, and (e) licensing requirements are still uncharted territories in many countries. The uncertainties emanating from this area pose a significant hurdle to the widespread adoption of blockchain technology, a complementary technology that offers added value and competitive advantages for energy systems. The authors undertake a conceptual and theoretical investigation to elucidate the lack of consensus, consistency, and legal certainty in the study of blockchain-based prosumerism. In addition, the authors set an exploratory tone to the discussion by taking an analytically eclectic approach that builds on multiple sources and theories to delve deeper into this topic. As an interdisciplinary study, this research accounts for the convergence of regulation, technology, and the energy sector. The study primarily adopts desk research, which examines regulatory frameworks and conceptual models for crucial policies at the international level to foster an all-inclusive discussion. With their reflections and insights into the interaction of blockchain and prosumerism in the energy sector, the authors do not aim to develop definitive regulatory models or instrument designs, but to contribute to the theoretical dialogue to navigate seminal issues and explore different nuances and pathways. Given the emergence of blockchain-based energy prosumerism, identifying the challenges, gaps and fragmentation of governance regimes is key to facilitating global regulatory transitions.

Keywords: blockchain technology, energy sector, prosumer, legal and regulatory

Procedia PDF Downloads 175
3215 Improving the Technology of Assembly by Use of Computer Calculations

Authors: Mariya V. Yanyukina, Michael A. Bolotov

Abstract:

Assembly accuracy is the degree of accordance between the actual values of the parameters obtained during assembly and the values specified in the assembly drawings and technical specifications. However, assembly accuracy depends not only on the quality of the production process but also on the correctness of the assembly process. Therefore, preliminary calculations of the assembly stages are carried out to verify the correspondence of real geometric parameters to their acceptable values. In the aviation industry, most calculations involve interacting dimensional chains, which greatly complicates the task. Solving such problems requires a special approach. The purpose of this article is to address the problem of improving the assembly technology of aviation units by means of computer calculations. One actual example of an assembly unit containing an interacting dimensional chain is the turbine wheel of a gas turbine engine. The dimensional chain of the turbine wheel is formed by the geometric parameters of the disk and the set of blades. The interaction of the dimensional chain consists in the formation of two chains. The first chain is formed by the dimensions that determine the location of the grooves for the installation of the blades and the dimensions of the blade roots. The second dimensional chain is formed by the dimensions of the airfoil shroud platform. The interaction of the dimensional chains of the turbine wheel is the interdependence of the first and second chains by means of power circuits formed by the middle parts of the turbine blades. The relevance of calculating the dimensional chain of the turbine wheel stems from the need to improve the assembly technology of this unit. The task at hand contains geometric and mathematical components; therefore, its solution can be implemented following the algorithm: 1) research and analysis of production errors of geometric parameters; 2) development of a parametric model in the CAD system; 3) creation of a set of CAD models of parts taking into account actual or generalised distributions of errors of geometric parameters; 4) calculation of the model in the CAE system, loading various combinations of part models; 5) accumulation of statistics and analysis. The main task is to pre-simulate the assembly process by calculating the interacting dimensional chains. The article describes the approach to the solution from the point of view of mathematical statistics, implemented in the software package Matlab. Within the framework of the study, measurement data are available for the components of the turbine wheel, the blades and disks, on the basis of which it is expected that the assembly process of the unit will be optimised by solving the dimensional chains.
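
The statistical core of the procedure can be illustrated with a short Monte Carlo sketch of an interacting dimensional chain; the nominal sizes, scatter and acceptance limits below are illustrative assumptions, and the study itself used Matlab rather than Python.

```python
# Monte Carlo tolerance stack-up for two interacting closing links (illustrative values).
import numpy as np

rng = np.random.default_rng(0)
n = 100_000                                  # simulated assemblies

# Chain 1: groove position on the disk and blade-root size (mm)
groove = rng.normal(loc=25.00, scale=0.02, size=n)
root   = rng.normal(loc=24.95, scale=0.03, size=n)

# Chain 2: shroud-platform width of adjacent blades (mm)
platform = rng.normal(loc=40.00, scale=0.05, size=n)

radial_gap = groove - root                   # closing link of chain 1
shroud_gap = 40.10 - platform                # closing link of chain 2, assumed nominal slot

# Fraction of simulated assemblies meeting assumed acceptance limits
ok = (radial_gap > 0.0) & (radial_gap < 0.12) & (shroud_gap > 0.0)
print(f"mean radial gap = {radial_gap.mean():.3f} mm, "
      f"std = {radial_gap.std():.3f} mm, yield = {ok.mean():.1%}")
```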

Keywords: accuracy, assembly, interacting dimension chains, turbine

Procedia PDF Downloads 367
3214 Validation of Electrical Field Effect on Electrostatic Desalter Modeling with Experimental Laboratory Data

Authors: Fatemeh Yazdanmehr, Iulian Nistor

Abstract:

The scope of the current study is the evaluation of the electric field effect on electrostatic desalter mathematical modeling using laboratory data. This research was focused on developing a model for an existing operating desalting unit of one of the Iranian heavy oil fields, with a production capacity of 75 MBPD. The high temperature of the inlet oil to the dehydration unit reduces oil recovery, so mathematical modeling of the desalter operating parameters is very significant. The operating data of the existing production unit were used to verify the accuracy of the mathematical desalting plant model. The inlet oil temperature to the desalter was decreased from 110 to 80°C, and the desalter electrical field was increased from 0.75 to 2.5 kV/cm. The model results show that these changes in desalter parameters meet the water-oil specification, and the oil production and, consequently, the annual income are increased. In addition, changing the desalter operating conditions reduces the environmental footprint because of flare gas reduction. To verify the accuracy of the selected electrostatic desalter electrical field, laboratory data were used. The experimental data are used to confirm the effect of the electrical field change on the desalter; therefore, a laboratory test was carried out on a crude oil sample. The results include the dehydration efficiency in the presence of a demulsifier under an electrical field (0.75 kV) at various temperatures. Comparing the laboratory experimental results with the electrostatic desalter mathematical model results shows an acceptable error of 1-3 percent, which confirms the validity of the changes to the desalter specification and operating conditions.

Keywords: desalter, electrical field, demulsification, mathematical modeling, water-oil separation

Procedia PDF Downloads 127
3213 [Keynote Talk]: sEMG Interface Design for Locomotion Identification

Authors: Rohit Gupta, Ravinder Agarwal

Abstract:

The surface electromyographic (sEMG) signal has the potential to identify human activities and intentions. This potential is further exploited to control artificial limbs using the sEMG signal from the residual limbs of amputees. The paper deals with the development of a multichannel, cost-efficient sEMG signal acquisition interface for research applications, along with the evaluation of a proposed class-dependent statistical approach to feature selection. The sEMG signal acquisition interface was developed using the ADS1298 from Texas Instruments, which is a front-end interface integrated circuit for ECG applications. Further, the sEMG signal was recorded from two lower limb muscles for three locomotion modes, namely Plane Walk (PW), Stair Ascending (SA), and Stair Descending (SD). A class-dependent statistical approach is proposed for feature selection, and its performance is compared with 12 pre-existing feature vectors. To make the study more extensive, the performance of five different types of classifiers is compared. The outcome of the current piece of work proves the suitability of the proposed feature selection algorithm for locomotion recognition compared to the other existing feature vectors. The SVM classifier is found to outperform the other compared classifiers, with an average recognition accuracy of 97.40%. Feature vector selection emerges as the most dominant factor affecting classification performance, as it accounts for 51.51% of the total variance in classification accuracy. The results demonstrate the potential of the developed sEMG signal acquisition interface along with the proposed feature selection algorithm.
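
A minimal sketch of the classification stage is given below under stated assumptions: synthetic two-channel windows stand in for the recorded sEMG, generic time-domain features replace the proposed class-dependent selection, and an RBF-kernel SVM (the best performer reported) is scored with cross-validation.

```python
# Illustrative sEMG windowing, feature extraction and SVM scoring (synthetic data).
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def features(window):
    """Common time-domain sEMG features for one analysis window (per channel)."""
    mav = np.mean(np.abs(window), axis=0)                         # mean absolute value
    rms = np.sqrt(np.mean(window**2, axis=0))                     # root mean square
    wl  = np.sum(np.abs(np.diff(window, axis=0)), axis=0)         # waveform length
    zc  = np.sum(np.diff(np.sign(window), axis=0) != 0, axis=0)   # zero crossings
    return np.concatenate([mav, rms, wl, zc])

rng = np.random.default_rng(1)
windows = rng.standard_normal((300, 200, 2))   # 300 windows, 200 samples, 2 channels
labels = rng.integers(0, 3, size=300)          # 0: PW, 1: SA, 2: SD
X = np.array([features(w) for w in windows])

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
print("cross-validated accuracy:", cross_val_score(clf, X, labels, cv=5).mean())
```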

Keywords: classifiers, feature selection, locomotion, sEMG

Procedia PDF Downloads 284
3212 The European Legislation on End-of-Waste

Authors: Claudio D'Alonzo

Abstract:

According to recent tendencies, progress on resource efficiency is possible, and it will lead to economic, environmental, and social benefits. The transition to a circular economy system, in which all materials and energy maintain their value for as long as possible, waste is reduced and only a few resources are used, is one of the most relevant parts of the European Union's environmental policy to develop a sustainable, competitive and low-carbon economy. A definition of the circular economy can be found in Decision 1386/2013/EU of the European Parliament and of the Council on a General Union Environment Action Programme to 2020, named “Living well, within the limits of our planet”. With the purpose of renewing waste management systems in the EU and making the European model one of the most effective in the world, a revised waste legislative framework entered into force in July 2018. Regarding the Italian legislation, the laws to be modified are Legislative Decree 3 April 2006, n. 152, the laws ruling waste management, end-of-waste and by-products, and the regulatory principles regarding the circular economy. European rules on end-of-waste are not fully harmonised, and so there are legal challenges. The target to be achieved is full consistency between the laws implementing waste and chemicals policies. Only in this way will materials be safe, fit-for-purpose and designed for durability; additionally, they will have a low environmental impact.

Keywords: circular economy, end-of-waste, legislation, secondary raw materials

Procedia PDF Downloads 73
3211 Ranking of Managerial Parameters Impacting upon Performance of Football Referees in Iran

Authors: Mohammad Reza Boromand, Masoud Moradi, Amin Eskandari

Abstract:

The present study attempts to determine the ranking of managerial parameters impacting the performance of football referees in Iran. The population consisted of all referees in Leagues 1, 2 and 3 as well as the Super League of Iran (N=273), from which 160 referees and assistant referees were selected in 2013-2014. A researcher-designed questionnaire was used for data collection, divided into two sections: (1) demographic details (age range, marital status, employment, refereeing experience, education level, refereeing level and proficiency) and (2) items related to parameters impacting the performance of referees (structural parameters, operational parameters, environmental parameters, temporal parameters, economic parameters, facilities and tools, personal performance and performance evaluation). Internal consistency was calculated by Cronbach's alpha (r=0.85). For data analysis, we performed Friedman's test using SPSS software (α = 0.05), along with descriptive statistics. The findings showed the following ranking for the above-mentioned managerial parameters: facilities and tools, personal performance, economic parameters, structural parameters, operational parameters, environmental parameters, temporal parameters, and performance evaluation.
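
The two statistics named in the abstract can be sketched as follows; the scores are simulated stand-ins for the questionnaire data, so this is an illustration of the analysis, not a reproduction of it.

```python
# Cronbach's alpha for internal consistency and Friedman's test for ranking.
import numpy as np
from scipy.stats import friedmanchisquare

def cronbach_alpha(items):
    """items: (respondents x items) matrix of scores."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

rng = np.random.default_rng(0)
scores = rng.integers(1, 6, size=(160, 8)).astype(float)   # 160 referees, 8 parameter groups

print("Cronbach's alpha:", round(cronbach_alpha(scores), 2))

# Friedman's test compares the 8 related parameter ratings across respondents;
# mean ranks then give an ordering like the one reported in the study.
stat, p = friedmanchisquare(*[scores[:, j] for j in range(scores.shape[1])])
mean_ranks = scores.argsort(axis=1).argsort(axis=1).mean(axis=0) + 1
print("Friedman chi-square:", round(stat, 2), "p =", round(p, 4), "mean ranks:", mean_ranks)
```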

Keywords: Iran, football referees, managerial parameters, performance

Procedia PDF Downloads 560
3210 Effect of Needle Height on Discharge Coefficient and Cavitation Number

Authors: Mohammadreza Nezamirad, Sepideh Amirahmadian, Nasim Sabetpour, Azadeh Yazdi, Amirmasoud Hamedi

Abstract:

Cavitation inside a diesel injector nozzle is investigated using the Reynolds-stress Navier-Stokes equations. The Schnerr-Sauer cavitation model is used for modeling cavitation inside the diesel injector nozzle. The carrying fluid utilized in the current study is diesel fuel. The flow is first verified by comparison with previous experimental data, and it was found that the K-Epsilon turbulence model leads to better accuracy compared to the K-Omega turbulence model. Moreover, the mass flow rate obtained numerically is compared with the experimental value, and the discrepancy was found to be less than 5 percent, which shows the accuracy of the current results. Finally, a real-size four-hole nozzle is investigated, and the flow inside it is visualized in terms of velocity profile, discharge coefficient, and cavitation number. It was found that the mesh density could be reduced significantly by utilizing periodic boundary conditions. The velocity contour at the mid-nozzle showed that the maximum velocity occurs at the end of the needle before entering the orifice area. Last but not least, for the same boundary conditions, when different needle heights were utilized, it was found that as the needle height increases, the cavitation number and the discharge coefficient increase, and these increases are more pronounced at smaller needle heights.

Keywords: cavitation, diesel fuel, CFD, real size nozzle, mass flow rate

Procedia PDF Downloads 139
3209 Diagnostic Accuracy of the Tuberculin Skin Test for Tuberculosis Diagnosis: Interest of Using ROC Curve and Fagan’s Nomogram

Authors: Nouira Mariem, Ben Rayana Hazem, Ennigrou Samir

Abstract:

Background and aim: During the past decade, the frequency of extrapulmonary forms of tuberculosis has increased. These forms are under-diagnosed using conventional tests. The aim of this study was to evaluate the performance of the Tuberculin Skin Test (TST) for the diagnosis of tuberculosis, using the ROC curve and Fagan's nomogram methodology. Methods: This was a case-control, multicenter study in 11 anti-tuberculosis centers in Tunisia, during the period from June to November 2014. The cases were adults aged between 18 and 55 years with confirmed tuberculosis. Controls were free from tuberculosis. A data collection sheet was filled out, and a TST was performed for each participant. Diagnostic accuracy measures of the TST were estimated using the ROC curve and the Area Under the Curve to derive the sensitivity and specificity of a determined cut-off point. Fagan's nomogram was used to estimate its predictive values. Results: Overall, 1053 patients were enrolled, comprising 339 cases (sex ratio (M/F) = 0.87) and 714 controls (sex ratio (M/F) = 0.99). The mean age was 38.3 ± 11.8 years for cases and 33.6 ± 11 years for controls. The mean diameter of the TST induration was significantly higher among cases than controls (13.7 mm vs. 6.2 mm; p = 10⁻⁶). The Area Under the Curve was 0.789 [95% CI: 0.758-0.819; p = 0.01], corresponding to a moderate discriminating power for this test. The most discriminative cut-off value of the TST, which was associated with the best sensitivity (73.7%) and specificity (76.6%) pair, was about 11 mm, with a Youden index of 0.503. The positive and negative predictive values were 3.11% and 99.52%, respectively. Conclusion: In view of these results, we can conclude that the TST can be used for tuberculosis diagnosis with good sensitivity and specificity. However, the measurement of skin induration and its interpretation are operator dependent and remain difficult and subjective. The combination of the TST with another test, such as the Quantiferon test, would be a good alternative.
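
The reported quantities can be reproduced in outline as follows: an ROC curve with its AUC, the Youden-optimal cut-off, and post-test probabilities obtained from likelihood ratios and a pre-test probability, which is what a Fagan nomogram does graphically. The induration values and the 1% pre-test probability below are simulated assumptions, not the study data.

```python
# ROC, Youden index and Fagan-style predictive values on simulated induration data.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)
induration = np.concatenate([rng.normal(13.7, 5.0, 339),    # cases (mm)
                             rng.normal(6.2, 5.0, 714)])    # controls (mm)
truth = np.concatenate([np.ones(339), np.zeros(714)])

fpr, tpr, thresholds = roc_curve(truth, induration)
best = (tpr - fpr).argmax()                                  # Youden index J = Se + Sp - 1
print("AUC:", round(roc_auc_score(truth, induration), 3),
      "| cut-off:", round(float(thresholds[best]), 1), "mm",
      "| sensitivity:", round(tpr[best], 3), "| specificity:", round(1 - fpr[best], 3))

# Post-test probabilities from likelihood ratios and an assumed 1% pre-test probability.
sens, spec, prev = tpr[best], 1 - fpr[best], 0.01
lr_pos, lr_neg = sens / (1 - spec), (1 - sens) / spec
odds = prev / (1 - prev)
ppv = lr_pos * odds / (1 + lr_pos * odds)
npv = 1 - (lr_neg * odds / (1 + lr_neg * odds))
print("PPV:", round(ppv, 4), "NPV:", round(npv, 4))
```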

Keywords: tuberculosis, tuberculin skin test, ROC curve, cut-off

Procedia PDF Downloads 58
3208 Change Detection Analysis on Support Vector Machine Classifier of Land Use and Land Cover Changes: Case Study on Yangon

Authors: Khin Mar Yee, Mu Mu Than, Kyi Lint, Aye Aye Oo, Chan Mya Hmway, Khin Zar Chi Winn

Abstract:

The dynamic Land Use and Land Cover (LULC) changes in Yangon have generally resulted in improvements in human welfare and economic development over the last twenty years. Mapping LULC is crucially important for the sustainable development of the environment. However, it is difficult to obtain exact data on how environmental factors influence the LULC situation at various scales, because the natural environment is composed of non-homogeneous surface features, so the features in the satellite data also contain mixed pixels. The main objective of this study is to calculate the accuracy of change detection of LULC changes using Support Vector Machines (SVMs). For this research work, the main data were satellite images from 1996, 2006 and 2015. Change detection statistics were computed to compile a detailed tabulation of changes between two classification images, and the SVM process was applied with a soft approach at the allocation stage as well as at the testing stage to achieve higher accuracy. The results of this paper showed that vegetation and cultivated areas decreased (by an average total of 29% from 1996 to 2015) because of conversion to built-up area, which more than doubled (an average total of 30% from 1996 to 2015). The error matrix and confidence limits led to the validation of the results for LULC mapping.

Keywords: land use and land cover change, change detection, image processing, support vector machines

Procedia PDF Downloads 121
3207 Profiling Risky Code Using Machine Learning

Authors: Zunaira Zaman, David Bohannon

Abstract:

This study explores the application of machine learning (ML) for detecting security vulnerabilities in source code. The research aims to assist organizations with large application portfolios and limited security testing capabilities in prioritizing security activities. ML-based approaches offer benefits such as increased confidence scores, false positives and negatives tuning, and automated feedback. The initial approach using natural language processing techniques to extract features achieved 86% accuracy during the training phase but suffered from overfitting and performed poorly on unseen datasets during testing. To address these issues, the study proposes using the abstract syntax tree (AST) for Java and C++ codebases to capture code semantics and structure and generate path-context representations for each function. The Code2Vec model architecture is used to learn distributed representations of source code snippets for training a machine-learning classifier for vulnerability prediction. The study evaluates the performance of the proposed methodology using two datasets and compares the results with existing approaches. The Devign dataset yielded 60% accuracy in predicting vulnerable code snippets and helped resist overfitting, while the Juliet Test Suite predicted specific vulnerabilities such as OS-Command Injection, Cryptographic, and Cross-Site Scripting vulnerabilities. The Code2Vec model achieved 75% accuracy and a 98% recall rate in predicting OS-Command Injection vulnerabilities. The study concludes that even partial AST representations of source code can be useful for vulnerability prediction. The approach has the potential for automated intelligent analysis of source code, including vulnerability prediction on unseen source code. State-of-the-art models using natural language processing techniques and CNN models with ensemble modelling techniques did not generalize well on unseen data and faced overfitting issues. However, predicting vulnerabilities in source code using machine learning poses challenges such as high dimensionality and complexity of source code, imbalanced datasets, and identifying specific types of vulnerabilities. Future work will address these challenges and expand the scope of the research.

Keywords: code embeddings, neural networks, natural language processing, OS command injection, software security, code properties

Procedia PDF Downloads 99
3206 Possibilities of Using Chia Seeds in Fermented Beverages Made from Mare’s and Cow’s Milk

Authors: Nancy Mahmoud, Joanna Teichert

Abstract:

Nowadays, fermented milk containing probiotic microorganisms is fundamental to human health. The changes in the properties of fermented milk during storage influence its quality and consumer acceptability. This study aimed to evaluate the effect of adding 1.5% chia seeds on the chemical, physical and sensory properties of fermented cow's and mare's milk stored for two weeks at 4°C. The results showed that the pH of cow's milk drops significantly at the 2nd hour, whereas that of mare's milk drops significantly at the 6th hour. The acidity of both types of milk increased as the storage time progressed. Adding chia seeds increased firmness significantly and improved color and consistency. A decrease in brightness (L*) and an increase in redness (a*) and yellowness (b*) during storage were observed. Our study showed that the chia seeds have a greater effect on reducing the brightness of fermented mare's milk than of fermented cow's milk. Analysis of taste and smell parameters showed that after adding chia seeds, the scores changed and became much higher. The sour taste of the fermented milk was reduced, which positively affected the acceptance of the product. Chia seeds induced beneficial effects on sensory outcomes and enhanced the physicochemical properties.

Keywords: mare milk, cow milk, fermented milk, kefir, koumiss

Procedia PDF Downloads 87
3205 Generative Adversarial Network Based Fingerprint Anti-Spoofing Limitations

Authors: Yehjune Heo

Abstract:

Fingerprint anti-spoofing approaches have been actively developed and applied in real-world applications. One of the main problems of fingerprint anti-spoofing is a lack of robustness to unseen samples, especially in real-world scenarios. A possible solution is to generate artificial but realistic fingerprint samples and use them for training in order to achieve good generalization. This paper contains experimental and comparative results with currently popular GAN-based methods and uses realistic synthesis of fingerprints in training in order to increase performance. Among various GAN models, the most popular, StyleGAN, is used for the experiments. The CNN models were first trained with the dataset that did not contain generated fake images, and the accuracy along with the mean average error rate were recorded. Then, the generated fake images (fake images of live fingerprints and fake images of spoof fingerprints) were each combined with the original images (real images of live fingerprints and real images of spoof fingerprints), and various CNN models were trained. For each CNN model trained with the dataset of generated fake images, the best performance, the accuracy, and the mean average error rate were recorded. We observe that current GAN-based approaches need significant improvements in anti-spoofing performance, although the overall quality of the synthesized fingerprints seems to be reasonable. We include an analysis of this performance degradation, especially with a small number of samples. In addition, we suggest several approaches towards improved generalization with a small number of samples, by focusing on what GAN-based approaches should and should not learn.

Keywords: anti-spoofing, CNN, fingerprint recognition, GAN

Procedia PDF Downloads 177
3204 Modelling Conceptual Quantities Using Support Vector Machines

Authors: Ka C. Lam, Oluwafunmibi S. Idowu

Abstract:

Uncertainty in cost is a major factor affecting performance of construction projects. To our knowledge, several conceptual cost models have been developed with varying degrees of accuracy. Incorporating conceptual quantities into conceptual cost models could improve the accuracy of early predesign cost estimates. Hence, the development of quantity models for estimating conceptual quantities of framed reinforced concrete structures using supervised machine learning is the aim of the current research. Using measured quantities of structural elements and design variables such as live loads and soil bearing pressures, response and predictor variables were defined and used for constructing conceptual quantities models. Twenty-four models were developed for comparison using a combination of non-parametric support vector regression, linear regression, and bootstrap resampling techniques. R programming language was used for data analysis and model implementation. Gross soil bearing pressure and gross floor loading were discovered to have a major influence on the quantities of concrete and reinforcement used for foundations. Building footprint and gross floor loading had a similar influence on beams and slabs. Future research could explore the modelling of other conceptual quantities for walls, finishes, and services using machine learning techniques. Estimation of conceptual quantities would assist construction planners in early resource planning and enable detailed performance evaluation of early cost predictions.
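
One of the model variants can be sketched as follows; the study itself was implemented in R, so this Python sketch, with synthetic design variables and an assumed quantity relation, only illustrates the combination of support vector regression with bootstrap resampling described above.

```python
# SVR quantity model with bootstrap resampling (synthetic stand-in data).
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 120
X = np.column_stack([
    rng.uniform(100, 2000, n),      # building footprint (m^2)
    rng.uniform(2, 10, n),          # gross floor loading (kN/m^2)
    rng.uniform(100, 400, n),       # gross soil bearing pressure (kN/m^2)
])
# assumed relation: foundation concrete grows with footprint and loading,
# shrinks with allowable bearing pressure, plus noise
y = 1.5 * X[:, 0] * X[:, 1] / X[:, 2] + rng.normal(0, 5, n)

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=100.0, epsilon=1.0))

# bootstrap: refit on resampled data and collect predictions for a new design
new_design = np.array([[800.0, 5.0, 200.0]])
preds = []
for _ in range(200):
    idx = rng.integers(0, n, n)
    model.fit(X[idx], y[idx])
    preds.append(model.predict(new_design)[0])
print(f"concrete quantity ~ {np.mean(preds):.1f} m^3 "
      f"(bootstrap 95% band {np.percentile(preds, 2.5):.1f}-{np.percentile(preds, 97.5):.1f})")
```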

Keywords: bootstrapping, conceptual quantities, modelling, reinforced concrete, support vector regression

Procedia PDF Downloads 202
3203 Satellite LiDAR-Based Digital Terrain Model Correction using Gaussian Process Regression

Authors: Keisuke Takahata, Hiroshi Suetsugu

Abstract:

Forest height is an important parameter for forest biomass estimation, and precise elevation data are essential for accurate forest height estimation. There are several globally or nationally available digital elevation models (DEMs), such as SRTM and ASTER. However, their accuracy is reported to be low, particularly in mountainous areas with closed canopy or steep slopes. Recently, space-borne LiDAR missions, such as the Global Ecosystem Dynamics Investigation (GEDI), have started to provide sparse but accurate ground elevation and canopy height estimates. Several studies have reported a high degree of accuracy in these elevation products on their exact footprints, while it is not clear how this sparse information can be used over wider areas. In this study, we developed a digital terrain model correction algorithm that spatially interpolates the difference between existing DEMs and GEDI elevation products using a Gaussian Process (GP) regression model. The results show that our GP-based methodology can reduce the mean bias of the elevation data from 3.7 m to 0.3 m when airborne LiDAR-derived elevation information is used as ground truth. Our algorithm is also capable of quantifying the elevation data uncertainty, which is a critical requirement for biomass inventory. Upcoming satellite LiDAR missions, like MOLI (Multi-footprint Observation Lidar and Imager), are expected to contribute to more accurate digital terrain model generation.
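
The correction idea can be sketched as follows: fit a Gaussian process to the difference between an existing DEM and the sparse spaceborne-LiDAR ground elevations at the footprint locations, then interpolate that difference, with its uncertainty, over the full grid. Coordinates, elevations and kernel settings below are illustrative assumptions, not the study's configuration.

```python
# GP interpolation of DEM bias observed at sparse LiDAR footprints (synthetic data).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# sparse footprints: (x, y) in km and bias = DEM elevation - LiDAR ground elevation (m)
footprints = rng.uniform(0, 10, size=(150, 2))
bias = 3.0 * np.sin(footprints[:, 0] / 2.0) + rng.normal(0, 0.5, 150)

kernel = 1.0 * RBF(length_scale=2.0) + WhiteKernel(noise_level=0.25)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(footprints, bias)

# correct a regular DEM grid and keep the predictive std as an uncertainty layer
gx, gy = np.meshgrid(np.linspace(0, 10, 50), np.linspace(0, 10, 50))
grid = np.column_stack([gx.ravel(), gy.ravel()])
bias_hat, bias_std = gp.predict(grid, return_std=True)
dem = rng.uniform(200, 800, size=grid.shape[0])   # stand-in DEM elevations (m)
dem_corrected = dem - bias_hat                    # remove the interpolated bias
print("mean predicted bias:", round(bias_hat.mean(), 2),
      "m | mean 1-sigma uncertainty:", round(bias_std.mean(), 2), "m")
```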

Keywords: digital terrain model, satellite LiDAR, gaussian processes, uncertainty quantification

Procedia PDF Downloads 167
3202 Explanation and Temporality in International Relations

Authors: Alasdair Stanton

Abstract:

What makes for a good explanation? Twenty years after Wendt’s important treatment of constitution and causation, non-causal explanations (sometimes referred to as ‘understanding’ or ‘descriptive inference’) have become, if not mainstream, at least accepted within International Relations. This article proceeds in two parts: firstly, it examines Wendt’s constitutional claims closely and, while it agrees there is a difference between the causal and the constitutional, rejects the view that constitutional explanations lack temporality. In fact, this author concludes that a constitutional argument is only possible if it relies upon a more foundational, causal argument. Secondly, through theoretical analysis of the constitutional argument, this research seeks to delineate temporal and non-temporal ways of explaining within International Relations. This article concludes that while constitutional explanations, like other logical arguments including comparative and counter-factual ones, are not truly non-causal explanations, they are not bound as tightly to the ‘real world’ as temporal arguments such as cause-effect, process tracing, or even interpretivist accounts. However, like mathematical models, non-temporal arguments should aim for empirical testability as well as internal consistency. This work aims to give clear theoretical grounding to those authors using non-temporal arguments, but also to encourage them, and their positivist critics, to engage in thoroughgoing empirical tests.

Keywords: causal explanation, constitutional understanding, empirical, temporality

Procedia PDF Downloads 187
3201 INRAM-3DCNN: Multi-Scale Convolutional Neural Network Based on Residual and Attention Module Combined with Multilayer Perceptron for Hyperspectral Image Classification

Authors: Jianhong Xiang, Rui Sun, Linyu Wang

Abstract:

In recent years, due to the continuous improvement of deep learning theory, the Convolutional Neural Network (CNN) has shown superior performance in Hyperspectral Image (HSI) classification research. Since HSI contains rich spatial-spectral information, using only a single-dimensional or single-size convolutional kernel limits the detailed feature information received by the CNN, which in turn limits the HSI classification accuracy. In this paper, we design a multi-scale CNN with an MLP based on residual and attention modules (INRAM-3DCNN) for the HSI classification task. We propose to use multiple 3D convolutional kernels to extract the packet feature information and fully learn the spatial-spectral features of HSI, while designing residual 3D convolutional branches to avoid the decline in classification accuracy caused by network degradation. Secondly, we design a 2D Inception module with a joint channel attention mechanism to quickly extract key spatial feature information at different scales of the HSI and reduce the complexity of the 3D model. Owing to the high parallel processing capability and nonlinear global action of the Multilayer Perceptron (MLP), we use it in combination with the preceding CNN structure for the final classification process. The experimental results on two HSI datasets show that the proposed INRAM-3DCNN method has superior classification performance and performs the classification task excellently.

Keywords: INRAM-3DCNN, residual, channel attention, hyperspectral image classification

Procedia PDF Downloads 70
3200 Low-Cost, Portable Optical Sensor with Regression Algorithm Models for Accurate Monitoring of Nitrites in Environments

Authors: David X. Dong, Qingming Zhang, Meng Lu

Abstract:

Nitrites enter waterways as runoff from croplands and are discharged from many industrial sites. Excessive nitrite inputs to water bodies lead to eutrophication. On-site rapid detection of nitrite is of increasing interest for managing fertilizer application and monitoring water source quality. Existing methods for detecting nitrites use spectrophotometry, ion chromatography, electrochemical sensors, ion-selective electrodes, chemiluminescence, and colorimetric methods. However, these methods either suffer from high cost or provide low measurement accuracy due to their poor selectivity to nitrites. Therefore, it is desired to develop an accurate and economical method to monitor nitrites in environments. We report a low-cost optical sensor, in conjunction with a machine learning (ML) approach, to enable high-accuracy detection of nitrites in water sources. The sensor works under the principle of measuring molecular absorptions of nitrites at three narrowband wavelengths (295 nm, 310 nm, and 357 nm) in the ultraviolet (UV) region. These wavelengths are chosen because they have relatively high sensitivity to nitrites; low-cost light-emitting diodes (LEDs) and photodetectors are also available at these wavelengths. A regression model is built, trained, and utilized to minimize cross-sensitivities of these wavelengths to the same analyte, thus achieving precise and reliable measurements with various interference ions. The measured absorbance data are input to the trained model, which provides a nitrite concentration prediction for the sample. The sensor is built with i) a miniature quartz cuvette as the test cell that contains a liquid sample under test, ii) three low-cost UV LEDs placed on one side of the cell as light sources, with each LED providing a narrowband light, and iii) a photodetector with a built-in amplifier and an analog-to-digital converter placed on the other side of the test cell to measure the power of transmitted light. This simple optical design allows measuring the absorbance data of the sample at the three wavelengths. To train the regression model, absorbances of nitrite ions and their combination with various interference ions are first obtained at the three UV wavelengths using a conventional spectrophotometer. Then, the spectrophotometric data are input to different regression algorithm models for training and evaluation of high-accuracy nitrite concentration prediction. Our experimental results show that the proposed approach enables instantaneous nitrite detection within several seconds. The sensor hardware costs about one hundred dollars, which is much cheaper than a commercial spectrophotometer. The ML algorithm helps to reduce the average relative errors to below 3.5% over a concentration range from 0.1 ppm to 100 ppm of nitrites. The sensor has been validated to measure nitrites at three sites in Ames, Iowa, USA. This work demonstrates an economical and effective approach to the rapid, reagent-free determination of nitrites with high accuracy. The integration of the low-cost optical sensor and ML data processing can find a wide range of applications in environmental monitoring and management.
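
The regression step can be sketched as follows: predict nitrite concentration from the absorbances at 295, 310 and 357 nm, training on mixtures that contain an interfering ion so that cross-sensitivities are learned away. The calibration data, absorptivity-like coefficients and the ridge regressor below are assumptions for illustration, not the authors' calibration.

```python
# Three-wavelength absorbance -> nitrite concentration regression (synthetic calibration).
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 400
nitrite = rng.uniform(0.1, 100, n)              # ppm
interferent = rng.uniform(0, 50, n)             # ppm of an assumed interfering ion

# assumed absorptivity-like coefficients per wavelength (nitrite, interferent)
coef = np.array([[0.010, 0.004],                # 295 nm
                 [0.012, 0.002],                # 310 nm
                 [0.008, 0.001]])               # 357 nm
A = np.column_stack([nitrite, interferent]) @ coef.T + rng.normal(0, 0.002, (n, 3))

X_train, X_test, y_train, y_test = train_test_split(A, nitrite, random_state=0)
model = Ridge(alpha=1e-3).fit(X_train, y_train)
rel_err = np.abs(model.predict(X_test) - y_test) / y_test
print(f"mean relative error: {rel_err.mean():.1%}")
```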

Keywords: optical sensor, regression model, nitrites, water quality

Procedia PDF Downloads 67
3199 Automated Transformation of 3D Point Cloud to BIM Model: Leveraging Algorithmic Modeling for Efficient Reconstruction

Authors: Radul Shishkov, Orlin Davchev

Abstract:

The digital era has revolutionized architectural practices, with building information modeling (BIM) emerging as a pivotal tool for architects, engineers, and construction professionals. However, the transition from traditional methods to BIM-centric approaches poses significant challenges, particularly in the context of existing structures. This research introduces a technical approach to bridge this gap through the development of algorithms that facilitate the automated transformation of 3D point cloud data into detailed BIM models. The core of this research lies in the application of algorithmic modeling and computational design methods to interpret and reconstruct point cloud data (a collection of data points in space, typically produced by 3D scanners) into comprehensive BIM models. This process involves complex stages of data cleaning, feature extraction, and geometric reconstruction, which are traditionally time-consuming and prone to human error. By automating these stages, our approach significantly enhances the efficiency and accuracy of creating BIM models for existing buildings. The proposed algorithms are designed to identify key architectural elements within point clouds, such as walls, windows, doors, and other structural components, and to translate these elements into their corresponding BIM representations. This includes the integration of parametric modeling techniques to ensure that the generated BIM models are not only geometrically accurate but also embedded with essential architectural and structural information. Our methodology has been tested on several real-world case studies, demonstrating its capability to handle diverse architectural styles and complexities. The results showcase a substantial reduction in time and resources required for BIM model generation while maintaining high levels of accuracy and detail. This research contributes significantly to the field of architectural technology by providing a scalable and efficient solution for the integration of existing structures into the BIM framework. It paves the way for more seamless and integrated workflows in renovation and heritage conservation projects, where the accuracy of existing conditions plays a critical role. The implications of this study extend beyond architectural practices, offering potential benefits in urban planning, facility management, and historic preservation.

Keywords: BIM, 3D point cloud, algorithmic modeling, computational design, architectural reconstruction

Procedia PDF Downloads 46
3198 A Communication Signal Recognition Algorithm Based on Holder Coefficient Characteristics

Authors: Hui Zhang, Ye Tian, Fang Ye, Ziming Guo

Abstract:

Communication signal modulation recognition technology is one of the key technologies in the field of modern information warfare. At present, automatic modulation recognition methods for communication signals fall into two major categories. One is the maximum likelihood hypothesis testing method based on decision theory; the other is the statistical pattern recognition method based on feature extraction. Currently, the statistical pattern recognition method, which includes feature extraction and classifier design, is the most commonly used. With the increasingly complex electromagnetic environment of communications, how to effectively extract the features of various signals at low signal-to-noise ratio (SNR) is a hot topic for scholars in various countries. To solve this problem, this paper proposes a feature extraction algorithm for communication signals based on an improved Holder cloud feature. An extreme learning machine (ELM), which addresses the real-time requirements of modern warfare, is used to classify the extracted features. The algorithm extracts the digital features of the improved cloud model without deterministic information in a low-SNR environment and uses the improved cloud model to obtain more stable Holder cloud features, so the performance of the algorithm is improved. This algorithm addresses the problem that a simple feature extraction algorithm based on the Holder coefficient feature is difficult to use for recognition at low SNR, and it also achieves better recognition accuracy. Simulation results show that the approach in this paper still gives a good classification result at low SNR; even when the SNR is -15 dB, the recognition accuracy still reaches 76%.
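
For the basic Holder coefficient feature that the improved cloud-model method builds on, a minimal sketch is given below; the reference sequences, test signals and noise level are assumptions for illustration, not the authors' settings.

```python
# Holder coefficient between a signal's spectrum and two reference sequences.
import numpy as np

def holder_coefficient(f, g, p=2.0):
    # Holder's inequality with 1/p + 1/q = 1 bounds this ratio by 1
    q = p / (p - 1.0)
    num = np.sum(np.abs(f * g))
    den = np.sum(np.abs(f) ** p) ** (1 / p) * np.sum(np.abs(g) ** q) ** (1 / q)
    return num / den

def holder_features(signal):
    spectrum = np.abs(np.fft.fft(signal))
    spectrum /= spectrum.max()
    n = spectrum.size
    rect = np.ones(n)                                # rectangular reference sequence
    tri = 1.0 - np.abs(np.linspace(-1.0, 1.0, n))    # triangular reference sequence
    return holder_coefficient(spectrum, rect), holder_coefficient(spectrum, tri)

# illustrative use: two noisy modulations give separable feature pairs
rng = np.random.default_rng(0)
t = np.arange(0, 1, 1 / 1000)
bits = np.repeat(rng.integers(0, 2, 50), 20)         # 50 symbols, 20 samples each
bpsk = np.cos(2 * np.pi * 100 * t + np.pi * bits) + 0.5 * rng.standard_normal(t.size)
fsk  = np.cos(2 * np.pi * (80 + 40 * bits) * t) + 0.5 * rng.standard_normal(t.size)
print("BPSK features:", holder_features(bpsk))
print("2FSK features:", holder_features(fsk))
```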

Keywords: communication signal, feature extraction, Holder coefficient, improved cloud model

Procedia PDF Downloads 143
3197 Determination of Optimal Stress Locations in 2D–9 Noded Element in Finite Element Technique

Authors: Nishant Shrivastava, D. K. Sehgal

Abstract:

In the finite element technique, nodal stresses are calculated from the displacements at the nodes. In this process, the displacements calculated at the nodes are sufficiently accurate, but the stresses calculated at the nodes are not. Therefore, the accuracy of stress computation in FEM models based on the displacement technique is a matter of concern, including for the computational time in shape optimization of engineering problems. The present work focuses on finding unique points within the element, as well as on its boundary, at which good accuracy in stress computation can be achieved. Generally, the major optimal stress points are located in the domain of the element; some points have also been located on the boundary of the element where stresses are fairly accurate compared to nodal values. It is subsequently concluded that there exist unique points within the element where stresses have higher accuracy than at other points in the element. Therefore, the main aim is to evolve a generalized procedure for the determination of the optimal stress locations inside the element, as well as on its boundaries, and to verify it against results from numerical experimentation. Results for quadratic 9-noded serendipity elements are presented, and the locations of the distinct optimal stress points are determined inside the element as well as on its boundaries. The theoretical results indicate optimal stress locations, in local coordinates, at the origin and at a distance of 0.577 from the origin in both directions. On the boundaries, the optimal stress locations are at the midpoints of the element edges and at a distance of 0.577 from the origin in both directions. These findings were verified and authenticated through numerical experimentation. For the numerical experimentation, five engineering problems were identified, and the numerical results of the 9-noded element were compared to those obtained by using the same order of 25-noded quadratic Lagrangian elements, which are considered as the standard. Root mean square errors were then plotted with respect to various locations within the elements as well as on the boundaries, and conclusions were drawn. After numerical verification, it is noted that in a 9-noded element, the origin and the locations at a distance of 0.577 from the origin in both directions are the best sampling points for the stresses. It was also noted that stresses calculated within the boundary line enclosed by the 0.577 midpoints are also very good, and the error found is very small. When sampling points move away from these points, the line zone error increases rapidly. Thus, it is established that there are unique points at the boundary of the element where stresses are accurate, which can be utilized in solving various engineering problems and are also useful in shape optimization.
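
A short one-dimensional illustration of why the value 0.577 appears: 0.577 is approximately 1/sqrt(3), the Gauss quadrature coordinate at which the derivative (stress) error of a quadratic element vanishes for a cubic displacement field. This is a toy check under stated assumptions, not the paper's two-dimensional study.

```python
# Strain error of a quadratic (3-node) element sampled at several local coordinates.
import numpy as np

u_exact  = lambda x: x**3 + 0.5 * x**2          # assumed "true" displacement on [-1, 1]
du_exact = lambda x: 3 * x**2 + x               # corresponding "true" strain

# derivatives of the quadratic Lagrange shape functions at local coordinate xi
dN = lambda xi: np.array([xi - 0.5, -2.0 * xi, xi + 0.5])
nodes = np.array([-1.0, 0.0, 1.0])
u_nodal = u_exact(nodes)                        # nodal displacements taken as exact

for xi in [1.0 / np.sqrt(3.0), 0.3, 0.9, 1.0]:
    strain_fe = dN(xi) @ u_nodal                # finite element strain at xi
    print(f"xi = {xi:5.3f}   |strain error| = {abs(strain_fe - du_exact(xi)):.4f}")
# The error vanishes at xi = 1/sqrt(3) ~ 0.577 and grows toward the node xi = 1.
```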

Keywords: finite elements, Lagrangian, optimal stress location, serendipity

Procedia PDF Downloads 101
3196 Performance Study of Classification Algorithms for Consumer Online Shopping Attitudes and Behavior Using Data Mining

Authors: Rana Alaa El-Deen Ahmed, M. Elemam Shehab, Shereen Morsy, Nermeen Mekawie

Abstract:

With the growing popularity and acceptance of e-commerce platforms, users face an ever-increasing burden in actually choosing the right product from the large number of online offers. Thus, techniques for personalization and shopping guides are needed by users. For a pleasant and successful shopping experience, users need to know easily which products to buy with high confidence. Since selling a wide variety of products has become easier due to the popularity of online stores, online retailers are able to sell more products than a physical store. The disadvantage is that customers might not find the products they need. In this research, the customer will be able to find the products he is searching for, because recommender systems are used on some e-commerce web sites. A recommender system learns from information about customers and products and provides appropriate personalized recommendations to help customers find the products they need. In this paper, eleven classification algorithms are comparatively tested to find the classifier that best fits consumer online shopping attitudes and behavior in the experimental dataset. The WEKA knowledge analysis tool, an open-source data mining workbench used for comparing conventional classifiers to find the best one, was employed in this research. Using the data mining tool (WEKA) with the tested classifiers, the results show that the decision table and filtered classifier give the highest accuracy, while classification via clustering and simple CART give the lowest accuracy.
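
The comparison workflow can be sketched with scikit-learn in place of WEKA (which is a Java tool); the classifiers and the synthetic dataset below are stand-ins, not the eleven WEKA learners or the study's shopping-behaviour data.

```python
# Cross-validated accuracy comparison across several classifiers (synthetic data).
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=12, n_informative=6,
                           n_classes=3, random_state=0)

classifiers = {
    "decision tree": DecisionTreeClassifier(random_state=0),
    "naive Bayes": GaussianNB(),
    "k-NN": KNeighborsClassifier(),
    "logistic regression": LogisticRegression(max_iter=1000),
    "random forest": RandomForestClassifier(random_state=0),
}
for name, clf in classifiers.items():        # 10-fold cross-validation, as in WEKA's default
    acc = cross_val_score(clf, X, y, cv=10).mean()
    print(f"{name:20s} accuracy = {acc:.3f}")
```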

Keywords: classification, data mining, machine learning, online shopping, WEKA

Procedia PDF Downloads 343
3195 Choking among Infants and Young Children

Authors: Emad M. Abdullat, Hasan A. Ader-Rahman, Rayyan Al Ali, Arwa A. Hudaib

Abstract:

This retrospective study aims to determine the epidemiological features of choking deaths in one of the general teaching hospitals in Jordan, with a focus on weaning practices and their relation to sucking as major factors underlying the mechanism of choking in infants and young children. The study utilized a retrospective design to review the records of forensic cases of foreign body aspiration examined at the forensic department of the Jordan University Hospital. A total of 27 cases of choking in the pediatric age group were retrieved from the reports of the autopsy cases dissected. All children who died due to choking on foreign bodies were under 11 years old. Choking on food materials constituted 44.4% of cases under 3 years of age, while choking on non-food materials was less prevalent under 3 years of age, comprising 18.5% of the cases. Health care personnel and parents need to be aware that the introduction of solid food, unlike exclusive breast or formula-milk feeding, can have serious consequences if it occurs at an inappropriate time or consistency during early childhood physical and functional development. Parents need to be educated regarding the appropriate timing and process of weaning.

Keywords: choking, infants, weaning practices, young children

Procedia PDF Downloads 479
3194 Enabling Oral Communication and Accelerating Recovery: The Creation of a Novel Low-Cost Electroencephalography-Based Brain-Computer Interface for the Differently Abled

Authors: Rishabh Ambavanekar

Abstract:

Expressive Aphasia (EA) is an oral disability, common among stroke victims, in which Broca’s area of the brain is damaged, interfering with verbal communication abilities. EA currently has no technological solution, and its only currently viable solutions are inefficient or available only to the affluent. This prompts the need for an affordable, innovative solution to facilitate recovery and assist in speech generation. This project proposes a novel concept: using a wearable, low-cost electroencephalography (EEG) device-based brain-computer interface (BCI) to translate a user’s inner dialogue into words. A low-cost EEG device was developed and found to be 10 to 100 times less expensive than any current EEG device on the market. As part of the BCI, a machine learning (ML) model was developed and trained using the EEG data. Two stages of testing were conducted to analyze the effectiveness of the device: a proof-of-concept test and a final solution test. The proof-of-concept test demonstrated an average accuracy of above 90%, and the final solution test demonstrated an average accuracy of above 75%. These two successful tests were used as a basis to demonstrate the viability of BCI research in developing lower-cost verbal communication devices. Additionally, the device proved not only to enable users to communicate verbally but also to have the potential to assist in accelerated recovery from the disorder.

Keywords: neurotechnology, brain-computer interface, neuroscience, human-machine interface, BCI, HMI, aphasia, verbal disability, stroke, low-cost, machine learning, ML, image recognition, EEG, signal analysis

Procedia PDF Downloads 110