Search results for: software fault prediction
4672 Beam, Column Joints Concrete in Seismic Zone
Authors: Khalifa Kherafa
Abstract:
This project studies concrete beam–column joints subjected to seismic loads. A bibliographical study is presented to clarify the work undertaken by researchers in the field during the last three decades, and especially the results of the last two years, on methods for calculating the transverse reinforcement in the various nodes of a structure. As an application, the forces in the columns and beams of an R+4 building in zone 3 were calculated with the finite element method using the software.
Keywords: beam–column joints, cyclic loading, shearing force, damaged joint
Procedia PDF Downloads 430
4671 Fan Engagement Sustainability and Fan Fatigue: Understanding the Role of Marvel Franchise for Fans
Authors: Mitrajit Biswas
Abstract:
This paper seeks to understand the issues involved in maintaining a fan base over time: how a fan base can actually be engaged, what the attributes are of keeping fans interested rather than fatigued or tired, and what key elements a franchise needs to stay active and keep fans engaged. The paper examines the primary elements that have allowed a franchise like Marvel to keep fans engaged for such a long period of time. This will help to broaden the literature on consumer engagement and consumption behaviour in modern times of unpredictability, and to understand how consumers experience a longer period of engagement. It will also help to identify the possible reasons for disengagement despite huge success and investment in fan engagement. The study will include in-depth interviews with a global sample of around 50 people, recruited through purposive, convenience, and snowball sampling. It will help to assess whether customer lifetime value as a theory can be sustained on the basis of customer relationship management and, if so, how companies can predict and maintain strategies for the consumer engagement process.
Keywords: consumption, fatigue, brand loyalty, sustainable consumption
Procedia PDF Downloads 79
4670 Research and Application of the Three-Dimensional Visualization Geological Modeling of Mine
Authors: Bin Wang, Yong Xu, Honggang Qu, Rongmei Liu, Zhenji Gao
Abstract:
Today's mining industry is advancing steadily toward digitalization and visualization. Three-dimensional visualization geological modeling of a mine is the digital characterization of a mineral deposit and one of the key technologies of the digital mine. Three-dimensional geological modeling is a technology that combines geological spatial information management, geological interpretation, geological spatial analysis and prediction, geostatistical analysis, entity content analysis, and graphic visualization in a three-dimensional computing environment, and it is used in geological analysis. In this paper, a three-dimensional geological model of an iron mine is constructed using Surpac, the weighting differences between the inverse distance power method and ordinary kriging are studied, and the ore body volume and reserves are simulated and calculated with both methods. Compared with the actual mine reserves, the results are relatively accurate, providing a scientific basis for mine resource assessment, reserve calculation, mining design, and so on.
Keywords: three-dimensional geological modeling, geological database, geostatistics, block model
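As a rough illustration of the two estimators compared in this abstract, the sketch below contrasts inverse distance weighting with ordinary kriging on synthetic drill-hole grades; the coordinates, grade values, and variogram model are assumptions for illustration, not data from the study.

```python
# Minimal sketch: inverse distance weighting (IDW) vs. ordinary kriging
# on synthetic 2D drill-hole data. Requires pykrige.
import numpy as np
from pykrige.ok import OrdinaryKriging

rng = np.random.default_rng(0)
x, y = rng.uniform(0, 1000, 50), rng.uniform(0, 1000, 50)
grade = 30 + 5 * np.sin(x / 200) + rng.normal(0, 1, 50)  # hypothetical Fe %

def idw(x0, y0, power=2.0):
    """Inverse-distance-power estimate at a single point."""
    d = np.hypot(x - x0, y - y0)
    if np.any(d < 1e-9):                  # exact hit on a sample
        return grade[np.argmin(d)]
    w = 1.0 / d**power
    return np.sum(w * grade) / np.sum(w)

ok = OrdinaryKriging(x, y, grade, variogram_model="spherical")
gx = gy = np.linspace(0, 1000, 21)
z_ok, var_ok = ok.execute("grid", gx, gy)    # kriged estimates + variance

z_idw = np.array([[idw(xi, yi) for xi in gx] for yi in gy])
print("mean grade  IDW: %.2f  OK: %.2f" % (z_idw.mean(), z_ok.mean()))
```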
Procedia PDF Downloads 74
4669 Predicting Oil Spills in Real-Time: A Machine Learning and AIS Data-Driven Approach
Authors: Tanmay Bisen, Aastha Shayla, Susham Biswas
Abstract:
Oil spills from tankers can cause significant harm to the environment and local communities, as well as have economic consequences. Early predictions of oil spills can help to minimize these impacts. Our proposed system uses machine learning and neural networks to predict potential oil spills by monitoring data from ship Automatic Identification Systems (AIS). The model analyzes ship movements, speeds, and changes in direction to identify patterns that deviate from the norm and could indicate a potential spill. Our approach not only identifies anomalies but also predicts spills before they occur, providing early detection and mitigation measures. This can prevent or minimize damage to the reputation of the company responsible and the country where the spill takes place. The model's performance on the MV Wakashio oil spill provides insight into its ability to detect and respond to real-world oil spills, highlighting areas for improvement and further research.
Keywords: anomaly detection, oil spill prediction, machine learning, image processing, graph neural network (GNN)
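The abstract does not disclose the exact detector, so the sketch below is only one plausible reading of the anomaly-detection step: an Isolation Forest flagging AIS records whose speed and course changes deviate from the norm. The feature names, synthetic values, and contamination rate are assumptions.

```python
# Hypothetical AIS anomaly screening with an Isolation Forest (scikit-learn).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
# Assumed per-message features: speed over ground (kn), course change
# (deg/min), speed change (kn/min). Normal traffic plus erratic manoeuvres.
normal = np.column_stack([rng.normal(14, 2, 500),
                          rng.normal(0, 3, 500),
                          rng.normal(0, 0.5, 500)])
erratic = np.array([[2.0, 45.0, -6.0], [0.5, 80.0, -8.0]])  # drifting/veering
X = np.vstack([normal, erratic])

clf = IsolationForest(contamination=0.01, random_state=0).fit(X)
flags = clf.predict(X)            # -1 marks anomalous messages to review
print("flagged messages:", np.where(flags == -1)[0])
```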
Procedia PDF Downloads 78
4668 Numerical Prediction of Bearing Strength on Composite Bolted Joint Using Three Dimensional Puck Failure Criteria
Authors: M. S. Meon, M. N. Rao, K-U. Schröder
Abstract:
Mechanical fasteners, especially bolts, are commonly used for joining carbon-fiber reinforced polymer (CFRP) composite structures due to their good joinability and easy maintenance. Since this approach involves notching, a proper progressive damage model (PDM) needs to be implemented and verified to capture the damage present in the structure. A three-dimensional (3D) Puck failure criterion is established to predict the ultimate bearing failure of such a joint. The failure criterion, incorporated with a degradation scheme, is coded in a user subroutine executed in Abaqus. A single lap joint (SLJ) is used as the target composite bolted joint configuration. The results revealed that the PDM adopted here can sufficiently predict the behaviour of the composite bolted joint up to ultimate bearing failure. In addition, mesh refinement near the holes increased the accuracy of the predicted strength as well as the computational effort.
Keywords: bearing strength, bolted joint, degradation scheme, progressive damage model
Procedia PDF Downloads 506
4667 Performance Complexity Measurement of Tightening Equipment Based on Kolmogorov Entropy
Authors: Guoliang Fan, Aiping Li, Xuemei Liu, Liyun Xu
Abstract:
The performance of tightening equipment declines over the course of operation in a manufacturing system, manifesting mainly as increasing randomness and degree of discretization in the tightening performance. To evaluate the degradation tendency of the tightening performance accurately, a complexity measurement approach based on Kolmogorov entropy is presented. First, the states of the performance index are partitioned to calibrate the degree of discretization. Then, the complexity measurement model based on Kolmogorov entropy is built; the model describes the performance degradation tendency of tightening equipment quantitatively. Finally, a case study verifies the efficiency and validity of the approach. The results show that the presented complexity measurement can effectively evaluate the degradation tendency of tightening equipment and can provide a theoretical basis for preventive maintenance and life prediction of equipment.
Keywords: complexity measurement, Kolmogorov entropy, manufacturing system, performance evaluation, tightening equipment
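As a sketch of the measurement idea, assuming the performance index has already been discretized into symbolic states, the Kolmogorov (entropy-rate) estimate below uses the block-entropy difference H(n+1) − H(n); the state sequences are synthetic stand-ins for real tightening data.

```python
# Entropy-rate (Kolmogorov entropy) estimate from a discretized state
# sequence via block entropies -- a minimal sketch on synthetic data.
import numpy as np
from collections import Counter

def block_entropy(seq, n):
    """Shannon entropy (nats) of length-n blocks of the symbol sequence."""
    blocks = Counter(tuple(seq[i:i + n]) for i in range(len(seq) - n + 1))
    p = np.array(list(blocks.values()), dtype=float)
    p /= p.sum()
    return -np.sum(p * np.log(p))

rng = np.random.default_rng(2)
healthy = rng.choice(3, size=5000, p=[0.8, 0.15, 0.05])   # low randomness
degraded = rng.choice(3, size=5000, p=[0.4, 0.35, 0.25])  # higher randomness

for name, s in [("healthy", healthy), ("degraded", degraded)]:
    k = block_entropy(s, 4) - block_entropy(s, 3)  # H(n+1) - H(n)
    print(f"{name}: K-entropy ~ {k:.3f} nats/step")
```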
Procedia PDF Downloads 264
4666 Digitalization in Aggregate Quarries
Authors: José Eugenio Ortiz, Pierre Plaza, Josefa Herrero, Iván Cabria, José Luis Blanco, Javier Gavilanes, José Ignacio Escavy, Ignacio López-Cilla, Virginia Yagüe, César Pérez, Silvia Rodríguez, Jorge Rico, Cecilia Serrano, Jesús Bernat
Abstract:
The development of artificial intelligence services in mining processes, specifically in aggregate quarries, is facilitating automation and improving numerous aspects of operations. Ultimately, AI is transforming the mining industry by improving efficiency, safety, and sustainability. With the ability to analyze large amounts of data and make autonomous decisions, AI offers great opportunities to optimize mining operations and maximize the economic and social benefits of this vital industry. Within the framework of the European DIGIECOQUARRY project, various services were developed for automatic identification of material quality, production estimation, anomaly detection, and prediction of consumption and production, with good results.
Keywords: aggregates, artificial intelligence, automatization, mining operations
Procedia PDF Downloads 92
4665 Paleogene Syn-Rift Play Identification in Palembang Sub-Basin, South Sumatera, Indonesia
Authors: Perdana Rakhmana Putra, Hansen Wijaya, Sri Budiyani, Muhamad Natsir, Alexis Badai Samudra
Abstract:
The Palembang Sub-Basin (PSB), located in the southern part of the South Sumatera Basin (SSB), consists of a half-graben complex trending N-S to NW-SE. These geometries are believed to result from a strike-slip regime that developed in the Eocene-Oligocene. Most of the wells in this area produce hydrocarbons from the late-stage syn-rift sequence, the Lower Talang Akar Formation (LTAF), and the post-rift sequence, the Batu Raja Formation (BRF), and were drilled to prove hydrocarbons in structural traps (three-way-dip anticlines, four-way-dip anticlines, dissected anticlines) and stratigraphic traps (carbonate build-ups and stratigraphic pinch-outs). Only a few wells reached the deeper syn-rift sequences, the Lahat Formation (LAF) and Lemat Formation (LEF). A new interpretation of the subsurface data was carried out using the tectonostratigraphy concept, focusing on the syn-rift sequence. Based on the seismic character of the basin centre, the section is divided into four sequences: pre-rift, rift initiation, maximum rift, and late rift. These sequences are believed to be a new exploration target in the mature PSB. This paper demonstrates the Paleogene paleo-depositional setting and the exploration play concept of the syn-rift sequence in the PSB. The main play for this area consists of stratigraphic and structural plays, where the stratigraphic play comprises Eocene-Oligocene sediments of LAF sandstone, the LEF-Benakat formation, and LAF with pinch-out geometry. The pinch-out and lens geometries and on-lap features can be seen on the seismic reflectors and formed at the time of the syn-rift sequence. The structural play is dominated by a three-way-dip play related to reverse fault traps.
Keywords: syn-rift, tectono-stratigraphy, exploration play, basin center play, south sumatera basin
Procedia PDF Downloads 205
4664 On Differential Growth Equation to Stochastic Growth Model Using Hyperbolic Sine Function in Height/Diameter Modeling of Pines
Authors: S. O. Oyamakin, A. U. Chukwu
Abstract:
Richards' growth equation, a generalized logistic growth equation, was improved upon by introducing an allometric parameter using the hyperbolic sine function. The integral solution to this was called the hyperbolic Richards growth model, the solution having been transformed from a deterministic to a stochastic growth model. Its predictive ability was compared with that of the classical Richards growth model; the approach mimics the natural variability of height/diameter increments with respect to age and therefore provides more realistic height/diameter predictions, assessed using the coefficient of determination (R²), mean absolute error (MAE), and mean square error (MSE). The Kolmogorov-Smirnov and Shapiro-Wilk tests were also used to check the behavior of the error term for possible violations. The mean function of top height/Dbh over age under the two models predicted the observed values of top height/Dbh more closely for the hyperbolic Richards nonlinear growth model than for the classical Richards growth model.
Keywords: height, Dbh, forest, Pinus caribaea, hyperbolic, Richard's, stochastic
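For reference, the classical Richards curve has the standard form shown first below. The second line is only an assumed illustration of the kind of hyperbolic-sine allometric adjustment the abstract describes; it is not the authors' exact equation.

```latex
% Classical Richards growth curve (standard form)
H(t) = \frac{A}{\left(1 + b\,e^{-kt}\right)^{1/m}}

% One possible hyperbolic-sine modification (assumed form): the linear
% age term kt is augmented by an allometric term \theta \sinh t
H(t) = \frac{A}{\left(1 + b\,e^{-\left(kt + \theta \sinh t\right)}\right)^{1/m}}
```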
Procedia PDF Downloads 483
4663 Study on the Forging of AISI 1015 Spiral Bevel Gear by Finite Element Analysis
Authors: T. S. Yang, J. H. Liang
Abstract:
This study applies the finite element method (FEM) to predict the maximum forging load, effective stress distribution, effective strain distribution, and workpiece temperature in spiral bevel gear forging of AISI 1015. Maximum forging load, effective stress, effective strain, and workpiece temperature are determined for different process parameters of the spiral bevel gear hot forging, such as module, number of teeth, helical angle, and workpiece temperature, using the FEM. Finally, the power requirement for the spiral bevel gear hot forging of AISI 1015 is predicted.
Keywords: spiral bevel gear, hot forging, finite element method
Procedia PDF Downloads 481
4662 Far-Field Acoustic Prediction of a Supersonic Expanding Jet Using Large Eddy Simulation
Authors: Jesus Ruano, Asensi Oliva
Abstract:
The hydrodynamic field generated by a jet expansion is computed via three-dimensional compressible Large Eddy Simulation (LES). The finite volume method (FVM) is the discretization used in this simulation, together with hybrid schemes based on kinetic energy preserving (KEP) schemes and upwind Godunov-based schemes with instability detectors. Velocity and pressure fields are stored at different surfaces near the jet, far enough away to enclose all the fluctuations, in order to use them as input for the acoustic solver. The acoustic field is obtained in the far-field region at several locations by means of a hybrid method based on the Ffowcs Williams-Hawkings (FWH) equation. This equation is formulated in the spectral domain, via Fourier transform of the acoustic sources, which are modeled from the results of the initial simulation. The results obtained allow the study of the broadband noise generated as well as the sound directivities.
Keywords: far-field noise, Ffowcs-Williams and Hawkings, finite volume method, large eddy simulation, jet noise
Procedia PDF Downloads 302
4661 Fatigue of Multiscale Nanoreinforced Composites: 3D Modelling
Authors: Leon Mishnaevsky Jr., Gaoming Dai
Abstract:
3D numerical simulations of fatigue damage of multiscale fiber reinforced polymer composites with secondary nanoclay reinforcement are carried out. Macro-micro FE models of the multiscale composites are generated automatically using Python based software. The effect of the nanoclay reinforcement (localized in the fiber/matrix interface (fiber sizing) and distributed throughout the matrix) on the crack path, damage mechanisms and fatigue behavior is investigated in numerical experiments.
Keywords: computational mechanics, fatigue, nanocomposites, composites
Procedia PDF Downloads 610
4660 Discovering Word-Class Deficits in Persons with Aphasia
Authors: Yashaswini Channabasavegowda, Hema Nagaraj
Abstract:
Aim: The current study aims at discovering word-class deficits with respect to the noun-verb ratio in confrontation naming, picture description, and picture-word matching tasks. A total of ten persons with aphasia (PWA) and ten age-matched neurotypical individuals (NTI) were recruited for the study. The research includes both behavioural and objective measures to assess word-class deficits in PWA. Objective: The main objective of the research is to identify word-class deficits seen in persons with aphasia using various speech-eliciting tasks. Method: The study was conducted in the L1 of the participants, Kannada. The action naming test and the Boston naming test, adapted into Kannada, were administered to the participants, and a picture description task was carried out. The picture-word matching task was administered using E-Prime software (version 2) to measure accuracy and reaction time for the identification of verbs and nouns. The stimuli were presented through auditory and visual modes. Data were analysed to identify errors in the naming of nouns versus verbs on the Boston naming test and the action naming test, as well as the usage of nouns and verbs in the picture description task. Reaction time and accuracy for picture-word matching were extracted from the software. Results: PWA showed a significant difference in sentence structure compared to age-matched NTI. PWA also showed impairment in syntactic measures in the picture description task, with fewer correct grammatical sentences and less correct usage of verbs and nouns, and they produced a greater proportion of nouns compared to verbs. PWA had poorer accuracy and shorter reaction times in the picture-word matching task compared to NTI, and accuracy was higher for nouns than for verbs in PWA. The deficits were noticed irrespective of the cause leading to aphasia.
Keywords: nouns, verbs, aphasia, naming, description
Procedia PDF Downloads 106
4659 AI for Efficient Geothermal Exploration and Utilization
Authors: Velimir Monty Vesselinov, Trais Kliplhuis, Hope Jasperson
Abstract:
Artificial intelligence (AI) is a powerful tool in the geothermal energy sector, aiding in both exploration and utilization. Identifying promising geothermal sites can be challenging due to limited surface indicators and the need for expensive drilling to confirm subsurface resources. Geothermal reservoirs can be located deep underground and exhibit complex geological structures, making traditional exploration methods time-consuming and imprecise. AI algorithms can analyze vast datasets of geological, geophysical, and remote sensing data, including satellite imagery, seismic surveys, geochemistry, geology, etc. Machine learning algorithms can identify subtle patterns and relationships within this data, potentially revealing hidden geothermal potential in areas previously overlooked. To address these challenges, a SIML (Science-Informed Machine Learning) technology has been developed. SIML methods differ from traditional ML techniques. In both cases, the ML models are trained to predict the spatial distribution of an output (e.g., pressure, temperature, heat flux) based on a series of inputs (e.g., permeability, porosity, etc.). Traditional ML relies on deep and wide neural networks (NNs) based on simple algebraic mappings to represent complex processes. In contrast, the SIML neurons incorporate complex mappings (including constitutive relationships and physics/chemistry models). This results in ML models that have a physical meaning and satisfy physics laws and constraints. The prototype of the developed software, called GeoTGO, is accessible through the cloud. Our software prototype demonstrates how different data sources can be made available for processing, executes demonstrative SIML analyses, and presents the results in tabular and graphic form.
Keywords: science-informed machine learning, artificial intelligence, exploration, utilization, hidden geothermal
Procedia PDF Downloads 61
4658 Passive Neutralization of Acid Mine Drainage Using Locally Produced Limestone
Authors: Reneiloe Seodigeng, Malwandla Hanabe, Haleden Chiririwa, Hilary Rutto, Tumisang Seodigeng
Abstract:
Neutralisation of acid mine drainage (AMD) using limestone is cost-effective, and good results can be obtained. However, this process has its limitations: it cannot be used for highly acidic water containing Fe(III). When Fe(III) reacts with CaCO₃, armoring results. Armoring slows the reaction, and additional alkalinity can no longer be generated. Limestone is easily accessible, so this problem can be dealt with readily. Experiments were carried out to evaluate the effect of PVC pipe length on ferric and ferrous ions; it was found that the shorter the pipe length, the more these dissolved metals precipitate. The effect of pipe length on hydrogen ions was also studied, and the two were found to have an inverse relationship. Experimental data were further compared with model predictions to see if they behave in a similar fashion. The model was able to predict the behaviour of the 1.5 m and 2 m pipes in ferric and ferrous ion precipitation.
Keywords: acid mine drainage, neutralisation, limestone, mathematical modelling
Procedia PDF Downloads 369
4657 Development of a Decision-Making Method by Using Machine Learning Algorithms in the Early Stage of School Building Design
Authors: Rajaian Hoonejani Mohammad, Eshraghi Pegah, Zomorodian Zahra Sadat, Tahsildoost Mohammad
Abstract:
Over the past decade, energy consumption in educational buildings has steadily increased. The purpose of this research is to provide a method to quickly predict the energy consumption of buildings by evaluating zones separately and decomposing the building, eliminating the complexity of geometry at the early design stage. To produce this framework, machine learning algorithms such as support vector regression (SVR) and artificial neural networks (ANN) are used to predict energy consumption and thermal comfort metrics in a school as a case study. The database consists of more than 55,000 samples across three climates of Iran. Cross-validation and unseen data were used for validation. For one specific label, cooling energy, the prediction accuracy is at least 84% and 89% for SVR and ANN, respectively. Overall, the results show that the SVR performed much better than the ANN.
Keywords: early stage of design, energy, thermal comfort, validation, machine learning
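A minimal sketch of the SVR branch of such a workflow, assuming a tabular database of zone-level design features; the feature names, sample counts, and hyperparameters are placeholders, not the study's configuration.

```python
# Sketch of SVR-based cooling-energy prediction with cross-validation
# (scikit-learn); synthetic features stand in for zone/design inputs.
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
# Assumed inputs: window-to-wall ratio, orientation, zone area, occupancy
# density, climate index -- stand-ins for the real database columns.
X = rng.uniform(0, 1, (2000, 5))
y = 50 + 120 * X[:, 0] + 30 * X[:, 2] + rng.normal(0, 5, 2000)  # kWh/m2

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print("5-fold R2: %.3f +/- %.3f" % (scores.mean(), scores.std()))
```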
Procedia PDF Downloads 77
4656 BFDD-S: Big Data Framework to Detect and Mitigate DDoS Attack in SDN Network
Authors: Amirreza Fazely Hamedani, Muzzamil Aziz, Philipp Wieder, Ramin Yahyapour
Abstract:
Software-defined networking has in recent years come into the sight of many network designers as a successor to traditional networking. Unlike traditional networks, where the control and data planes engage together within a single device in the network infrastructure, such as switches and routers, the two planes are kept separated in software-defined networks (SDNs). All critical decisions about packet routing are made on the network controller, and the data-plane devices forward the packets based on these decisions. This type of network is vulnerable to DDoS attacks, which degrade the overall functioning and performance of the network by continuously injecting fake flows into it. This places a substantial burden on the controller side, and the result ultimately leads to the inaccessibility of the controller and the lack of network service for legitimate users. Thus, the protection of this novel network architecture against denial-of-service attacks is essential. In the world of cybersecurity, attacks and new threats emerge every day. It is essential to have tools capable of managing and analyzing all this new information to detect possible attacks in real time. These tools should provide a comprehensive solution to automatically detect, predict, and prevent abnormalities in the network. Big data encompasses a wide range of studies, but it mainly refers to the massive amounts of structured and unstructured data that organizations deal with on a regular basis. On the other hand, it concerns not only the volume of the data but also how data-driven information can be used to enhance decision-making processes, security, and the overall efficiency of a business. This paper presents an intelligent big data framework as a solution to handle the illegitimate traffic burden on the SDN network created by numerous DDoS attacks. The framework entails an efficient defence and monitoring mechanism against DDoS attacks, employing state-of-the-art machine learning techniques.
Keywords: apache spark, apache kafka, big data, DDoS attack, machine learning, SDN network
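A compressed sketch of the streaming side of such a framework, assuming flow statistics are published to a Kafka topic as CSV records; the topic name, schema, and the simple threshold rule (standing in for the paper's ML detector) are all assumptions.

```python
# Sketch: consume SDN flow statistics from Kafka with Spark Structured
# Streaming and flag abnormally high packet rates (stand-in for ML scoring).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("sdn-ddos-monitor").getOrCreate()

raw = (spark.readStream.format("kafka")
       .option("kafka.bootstrap.servers", "localhost:9092")
       .option("subscribe", "flow-stats")          # assumed topic name
       .load())

# Assumed CSV payload: src_ip,dst_ip,packets_per_s,bytes_per_s
cols = F.split(F.col("value").cast("string"), ",")
flows = raw.select(cols.getItem(0).alias("src_ip"),
                   cols.getItem(2).cast("double").alias("pps"))

suspects = flows.where(F.col("pps") > 1e4)         # placeholder for ML model

query = (suspects.writeStream.outputMode("append")
         .format("console").start())
query.awaitTermination()
```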
Procedia PDF Downloads 172
4655 Hardware Implementation for the Contact Force Reconstruction in Tactile Sensor Arrays
Authors: María-Luisa Pinto-Salamanca, Wilson-Javier Pérez-Holguín
Abstract:
Reconstruction of contact forces is a fundamental technique for analyzing the properties of a touched object and is essential for regulating the grip force in slip control loops. It is based on processing the distribution, intensity, and direction of the forces captured by the sensors. Efficient hardware alternatives are now used more frequently in different fields of application, allowing the implementation of computationally complex algorithms, as is the case with tactile signal processing. The use of hardware for smart tactile sensing systems is a research area that promises to improve the processing time and portability requirements of applications such as artificial skin and robotics, among others. The literature review shows that hardware implementations are present today in almost all stages of smart tactile detection systems except the force reconstruction process, a stage in which they have been less applied. This work presents a hardware implementation of a model-driven method reported in the literature for the contact force reconstruction of flat and rigid tactile sensor arrays from normal stress data. Starting from the analysis of a software implementation of the model, this implementation proposes the parallelization of tasks that facilitate the execution of matrix operations and a two-dimensional optimization function to obtain a force vector for each taxel in the array. This work seeks to take advantage of the parallel hardware characteristics of field-programmable gate arrays (FPGAs) and the possibility of applying appropriate techniques for algorithm parallelization, using as a guide the rules of generalization, efficiency, and scalability in the tactile decoding process and considering low latency, low power consumption, and real-time execution as the main design parameters. The results show a maximum estimation error of 32% in the tangential forces and 22% in the normal forces with respect to simulation by the finite element modeling (FEM) technique of Hertzian and non-Hertzian contact events, over sensor arrays of 10×10 taxels of different sizes. The hardware implementation was carried out on a Xilinx® MPSoC XCZU9EG-2FFVB1156 platform that allows the reconstruction of force vectors following a scalable approach, from the information captured by tactile sensor arrays composed of up to 48×48 taxels that use various transduction technologies. The proposed implementation demonstrates a reduction in estimation time to 1/180 of that of software implementations. Despite the relatively high estimation errors, the information provided by this implementation on the tangential and normal tractions and the triaxial reconstruction of forces makes it possible to adequately reconstruct the tactile properties of the touched object, which are similar to those obtained in the software implementation and in the two FEM simulations taken as reference. Although the errors could be reduced, the proposed implementation is useful for decoding contact forces in portable tactile sensing systems, thus helping to expand electronic skin applications in robotic and biomedical contexts.
Keywords: contact forces reconstruction, forces estimation, tactile sensor array, hardware implementation
Procedia PDF Downloads 200
4654 Evaluation of Random Forest and Support Vector Machine Classification Performance for the Prediction of Early Multiple Sclerosis from Resting State FMRI Connectivity Data
Authors: V. Saccà, A. Sarica, F. Novellino, S. Barone, T. Tallarico, E. Filippelli, A. Granata, P. Valentino, A. Quattrone
Abstract:
The aim of this work was to evaluate how well Random Forest (RF) and Support Vector Machine (SVM) algorithms can support the early diagnosis of Multiple Sclerosis (MS) from resting-state functional connectivity data. In particular, we wanted to explore the ability of mean signals, extracted from ICA components corresponding to 15 well-known networks, to distinguish between controls and patients. Eighteen patients with early MS (mean age 37.42±8.11, 9 females) were recruited according to the McDonald and Polman criteria and matched for demographic variables with 19 healthy controls (mean age 37.55±14.76, 10 females). MRI was acquired on a 3T scanner with an 8-channel head coil: (a) whole-brain T1-weighted; (b) conventional T2-weighted; (c) resting-state functional MRI (rsFMRI), 200 volumes. Estimated total lesion load (ml) and number of lesions were calculated using the LST toolbox from the corrected T1 and FLAIR. All rsFMRI data were pre-processed using tools from the FMRIB Software Library as follows: (1) discarding the first 5 volumes to remove T1 equilibrium effects, (2) skull-stripping of images, (3) motion and slice-time correction, (4) denoising with a high-pass temporal filter (128 s), (5) spatial smoothing with a Gaussian kernel of FWHM 8 mm. No statistically significant differences (t-test, p < 0.05) were found between the two groups in the mean Euclidean distance and the mean Euler angle. WM and CSF signals, together with 6 motion parameters, were regressed out of the time series. We applied an independent component analysis (ICA) with the GIFT toolbox using the Infomax approach with number of components = 21. Fifteen mean components were visually identified by two experts. The resulting z-score maps were thresholded and binarized to extract the mean signal of the 15 networks for each subject. Statistical and machine learning analyses were then conducted on this dataset, composed of 37 rows (subjects) and 15 features (mean signal in each network), with the R language. The dataset was randomly split into training (75%) and test sets, and two different classifiers were trained: RF and RBF-SVM. We used the intrinsic feature selection of RF, based on the Gini index, and recursive feature elimination (RFE) for the SVM to obtain a ranking of the most predictive variables. We then built two new classifiers on only the most important features and evaluated the accuracies (with and without feature selection) on the test set. The classifiers trained on all the features showed very poor accuracies on the training (RF: 58.62%, SVM: 65.52%) and test sets (RF: 62.5%, SVM: 50%). Interestingly, when feature selection by RF and RFE-SVM was performed, the most important variable in both cases was the sensorimotor network I. Indeed, with only this network, the RF and SVM classifiers reached an accuracy of 87.5% on the test set. More interestingly, the only misclassified patient turned out to have the lowest lesion volume. We showed that, with two different classification algorithms and feature selection approaches, the best discriminant network between controls and early MS was the sensorimotor network I. Similar importance values were obtained for the sensorimotor II, cerebellum, and working memory networks. These findings, in accordance with the early manifestation of motor/sensory deficits in MS, could represent an encouraging step toward translation to clinical diagnosis and prognosis.
Keywords: feature selection, machine learning, multiple sclerosis, random forest, support vector machine
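A condensed sketch of the classification pipeline described above (37 subjects × 15 network signals), assuming synthetic data; the RF Gini importances and the RFE ranking mirror the described approach, not the study's actual values. Note that RFE requires a linear SVM kernel to expose coefficients, whereas the study's classifier was RBF-SVM.

```python
# Sketch of the RF and RFE-SVM analysis on a 37x15 network-signal table.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.feature_selection import RFE
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
X = rng.normal(size=(37, 15))          # mean signal per network (synthetic)
y = np.array([0] * 19 + [1] * 18)      # 19 controls, 18 early-MS patients

X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.75,
                                          stratify=y, random_state=0)

rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)
print("RF test acc:", rf.score(X_te, y_te))
print("top RF feature (Gini):", np.argmax(rf.feature_importances_))

# RFE needs a linear kernel to expose coefficients for ranking.
rfe = RFE(SVC(kernel="linear"), n_features_to_select=1).fit(X_tr, y_tr)
print("RFE-SVM ranking (1 = most predictive):", rfe.ranking_)
```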
Procedia PDF Downloads 241
4653 Internal Corrosion Rupture of a 6-in Gas Line Pipe
Authors: Fadwa Jewilli
Abstract:
A sudden leak in a 6-inch gas line pipe was observed after one year in service. The pipe had been designed to transport dry gas. The failure had taken place at the 6 o'clock position at the stage discharge of the flow process. Laboratory investigations were conducted to determine the cause of the pipe rupture. Visual and metallographic observations confirmed that the pipe split was due to a crack that initiated in the circumferential direction and then turned into the longitudinal direction. Severe wall-thickness reduction was noticed on the internal pipe surface. Scanning electron microscopy of the fracture surface revealed features of a ductile fracture mode. Corrosion product analysis showed traces of iron carbonate and iron sulphate. The laboratory analysis led to the conclusion that the pipe failed because wet fluid (condensate) caused severe wall-thickness dissolution, leaving the pipe unable to withstand continued in-service working conditions.
Keywords: gas line pipe, corrosion prediction, ductile fracture, failure analysis
Procedia PDF Downloads 87
4652 The Relationship between Iranian EFL Learners' Multiple Intelligences and Their Performance on Grammar Tests
Authors: Rose Shayeghi, Pejman Hosseinioun
Abstract:
The Multiple Intelligences theory characterizes human intelligence as a multifaceted entity that exists in all human beings to varying degrees. The most important contribution of this theory to the field of English Language Teaching (ELT) is its role in identifying individual differences and designing more learner-centered programs. The present study aims at investigating the relationship between different elements of multiple intelligence and grammar scores. To this end, 63 female Iranian EFL learners selected from among intermediate students participated in the study. The instruments employed were a Nelson English language test, the Michigan Grammar Test, and the Teele Inventory for Multiple Intelligences (TIMI). The results of a Pearson product-moment correlation revealed a significant positive correlation between grammatical accuracy and linguistic as well as interpersonal intelligence. The results of stepwise multiple regression indicated that linguistic intelligence contributed to the prediction of grammatical accuracy.
Keywords: multiple intelligence, grammar, ELT, EFL, TIMI
Procedia PDF Downloads 494
4651 CRYPTO COPYCAT: A Fashion Centric Blockchain Framework for Eliminating Fashion Infringement
Authors: Magdi Elmessiry, Adel Elmessiry
Abstract:
The fashion industry represents a significant portion of global gross domestic product; however, it is plagued by cheap imitators that infringe on trademarks, destroying the fashion industry's hard work and investment. While the copycats are eventually found and stopped, the damage has already been done: sales are missed and direct and indirect jobs are lost. The infringer thrives on two main facts: the time it takes to be discovered and the lack of tracking technologies that can help the consumer distinguish fakes. Blockchain technology is a new emerging technology that provides a distributed, encrypted, immutable, and fault-resistant ledger. Blockchain presents a ripe technology to resolve the infringement epidemic facing the fashion industry. The significance of the study is that a new approach leveraging state-of-the-art blockchain technology coupled with artificial intelligence is used to create a framework addressing the fashion infringement problem. It transforms the current focus on legal enforcement, which is difficult at best, into consumer awareness, which is far more effective. The framework, Crypto CopyCat, creates an immutable digital asset representing the actual product to empower the customer with a near-real-time query system. This combination emphasizes the consumer's awareness and appreciation of the product's authenticity, while providing real-time feedback to the producer regarding fake replicas. The main findings of this study are that implementing this approach can delay fake products' penetration of the original product's market, thus allowing the original product time to take advantage of the market. The shift in fake adoption results in reduced returns, which impedes the copycat market and moves the emphasis to original product innovation.
Keywords: fashion, infringement, blockchain, artificial intelligence, textiles supply chain
Procedia PDF Downloads 263
4650 TransDrift: Modeling Word-Embedding Drift Using Transformer
Authors: Nishtha Madaan, Prateek Chaudhury, Nishant Kumar, Srikanta Bedathur
Abstract:
In modern NLP applications, word embeddings are a crucial backbone that can be readily shared across a number of tasks. However, as text distributions change and word semantics evolve over time, the downstream applications using the embeddings can suffer if the word representations do not conform to the data drift. Thus, keeping word embeddings consistent with the underlying data distribution is a key problem. In this work, we tackle this problem and propose TransDrift, a transformer-based prediction model for word embeddings. Leveraging the flexibility of the transformer, our model accurately learns the dynamics of the embedding drift and predicts future embeddings. In experiments, we compare with existing methods and show that our model makes significantly more accurate predictions of the word embeddings than the baselines. Crucially, by applying the predicted embeddings as a backbone for downstream classification tasks, we show that our embeddings lead to superior performance compared to the previous methods.
Keywords: NLP applications, transformers, Word2vec, drift, word embeddings
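A toy sketch of the idea, assuming drift is modeled as predicting the next embedding snapshot from a short history of per-word snapshots; the dimensions, layer counts, and synthetic drift process are illustrative assumptions, not the TransDrift configuration.

```python
# Toy transformer that maps a history of embedding snapshots to the next one.
import torch
import torch.nn as nn

d_model, history = 64, 4
layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True)
encoder = nn.TransformerEncoder(layer, num_layers=2)
head = nn.Linear(d_model, d_model)      # read out the predicted embedding

def predict_next(snapshots):            # (batch, history, d_model)
    h = encoder(snapshots)
    return head(h[:, -1, :])            # last position -> next-step embedding

# Synthetic drift: each step adds a small consistent shift plus noise.
words, shift = 512, 0.05 * torch.randn(d_model)
base = torch.randn(words, d_model)
seq = torch.stack([base + t * shift + 0.01 * torch.randn(words, d_model)
                   for t in range(history + 1)], dim=1)
x, target = seq[:, :history, :], seq[:, -1, :]

opt = torch.optim.Adam(list(encoder.parameters()) + list(head.parameters()),
                       lr=1e-3)
for _ in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(predict_next(x), target)
    loss.backward()
    opt.step()
print("final MSE:", float(loss))
```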
Procedia PDF Downloads 96
4649 Heart Ailment Prediction Using Machine Learning Methods
Authors: Abhigyan Hedau, Priya Shelke, Riddhi Mirajkar, Shreyash Chaple, Mrunali Gadekar, Himanshu Akula
Abstract:
The heart is the coordinating centre of the major endocrine glandular structure of the body, which produces hormones that profoundly affect the operations of the body, and diagnosing cardiovascular disease is a difficult but critical task. By extracting knowledge and information about the disease from patient data, data mining is a practical technique to help doctors detect disorders. We use a variety of machine learning methods here, including logistic regression, support vector classifiers (SVC), k-nearest neighbours classifiers (KNN), decision tree classifiers, random forest classifiers, and gradient boosting classifiers. These algorithms are applied to patient data containing 13 different factors to build a system that predicts heart disease in less time and with more accuracy.
Keywords: logistic regression, support vector classifier, k-nearest neighbour, decision tree, random forest and gradient boosting
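A minimal sketch of the model comparison described above, assuming a 13-feature tabular dataset; the generated data and default hyperparameters are placeholders for the real patient table.

```python
# Sketch: compare the six classifiers named above on 13 patient features.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Placeholder for the real patient table (13 factors, binary outcome).
X, y = make_classification(n_samples=1000, n_features=13, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {"logreg": LogisticRegression(max_iter=1000), "svc": SVC(),
          "knn": KNeighborsClassifier(), "tree": DecisionTreeClassifier(),
          "rf": RandomForestClassifier(), "gb": GradientBoostingClassifier()}
for name, m in models.items():
    print(name, round(m.fit(X_tr, y_tr).score(X_te, y_te), 3))
```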
Procedia PDF Downloads 57
4648 A Geographic Information System Mapping Method for Creating Improved Satellite Solar Radiation Dataset Over Qatar
Authors: Sachin Jain, Daniel Perez-Astudillo, Dunia A. Bachour, Antonio P. Sanfilippo
Abstract:
The future of solar energy in Qatar is evolving steadily. Hence, high-quality spatial solar radiation data is of the utmost importance for any planning and commissioning of solar technology. Generally, two types of solar radiation data are available: satellite data and ground observations. Satellite solar radiation data is developed by physical and statistical models. Ground data is collected by solar radiation measurement stations. The ground data is of high quality; however, it is limited to distributed point locations, with high costs of installation and maintenance for the ground stations. On the other hand, satellite solar radiation data is continuous and available across geographical locations, but it is relatively less accurate than ground data. To utilize the advantages of both, a product has been developed here which provides spatial continuity and higher accuracy than either data source alone. The popular satellite database NSRDB (National Solar Radiation Database; PSM V3 model, spatial resolution 4 km) is chosen here for merging with ground-measured solar radiation measurements in Qatar. The spatial distribution of ground solar radiation measurement stations is comprehensive in Qatar, with a network of 13 ground stations. The monthly average of the daily total Global Horizontal Irradiation (GHI) component from ground and satellite data is used for the error analysis. Normalized root mean square error (NRMSE) values of 3.31%, 6.53%, and 6.63% were observed for October, November, and December 2019, respectively, when comparing in-situ and NSRDB data. The method is based on the Empirical Bayesian Kriging Regression Prediction model available in ArcGIS (ESRI). The workflow of the algorithm is based on a combination of regression and kriging methods. A regression model (OLS, ordinary least squares) is fitted between the ground and NSRDB data points. A semi-variogram model is fitted to the experimental semi-variogram obtained from the residuals. The kriging residuals obtained after fitting the semi-variogram model are added to the NSRDB predicted values obtained from the regression model to produce the final predicted values. The NRMSE values obtained after merging are 1.84%, 1.28%, and 1.81% for October, November, and December 2019, respectively. One more explanatory variable, the ground elevation, has been incorporated into the regression and kriging methods to reduce the error and to provide higher spatial resolution (30 m). The final GHI maps have been created after merging, and NRMSE values of 1.24%, 1.28%, and 1.28% have been observed for October, November, and December 2019, respectively. The proposed merging method has proven to be highly accurate. An additional method is also proposed here: generating calibrated maps using the regression and kriging models, and further using the calibrated model to generate solar radiation maps from the explanatory variable alone when not enough historical ground data is available for long-term analysis. The NRMSE values obtained after comparison of the calibrated maps with ground data are 5.60% and 5.31% for November and December 2019, respectively.
Keywords: global horizontal irradiation, GIS, empirical bayesian kriging regression prediction, NSRDB
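A simplified sketch of the merging workflow (regression on satellite values, kriging of the residuals, sum of the two), assuming synthetic station data; pykrige's ordinary kriging stands in for the ArcGIS EBK Regression Prediction tool, and all coordinates and GHI values are illustrative.

```python
# Regression-kriging sketch: OLS on satellite GHI, ordinary kriging of the
# residuals, and the sum as the merged estimate. Requires pykrige.
import numpy as np
from sklearn.linear_model import LinearRegression
from pykrige.ok import OrdinaryKriging

rng = np.random.default_rng(5)
lon, lat = rng.uniform(50.8, 51.6, 13), rng.uniform(24.5, 26.2, 13)
sat = rng.uniform(5.5, 7.0, 13)                   # satellite GHI, kWh/m2/day
ground = 0.95 * sat + 0.2 + rng.normal(0, 0.05, 13)

ols = LinearRegression().fit(sat.reshape(-1, 1), ground)
resid = ground - ols.predict(sat.reshape(-1, 1))

ok = OrdinaryKriging(lon, lat, resid, variogram_model="exponential")
glon, glat = np.linspace(50.8, 51.6, 30), np.linspace(24.5, 26.2, 30)
r_grid, _ = ok.execute("grid", glon, glat)        # kriged residual surface

sat_grid = rng.uniform(5.5, 7.0, (30, 30))        # stand-in satellite raster
merged = ols.predict(sat_grid.reshape(-1, 1)).reshape(30, 30) + r_grid
print("merged GHI range: %.2f - %.2f" % (merged.min(), merged.max()))
```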
Procedia PDF Downloads 91
4647 Random Variation of Treated Volumes in Fractionated 2D Image Based HDR Brachytherapy for Cervical Cancer
Authors: R. Tudugala, B. M. A. I. Balasooriya, W. M. Ediri Arachchi, R. W. M. W. K. Rathnayake, T. D. Premaratna
Abstract:
Brachytherapy involves placing a source of radiation near the cancer site, which gives a promising prognosis for cervical cancer treatments. The purpose of this study was to evaluate the random variation of treated volumes between fractions in 2D image based fractionated high dose rate brachytherapy for cervical cancer at the National Cancer Institute Maharagama (NCIM), Sri Lanka. Dose plans were analyzed for 150 cervical cancer patients with orthogonal radiograph (2D) based brachytherapy. ICRU treated volumes were modeled by translating the applicators with the help of the Multisource HDR plus software. The difference of the treated volumes with respect to the applicator geometry was analyzed using SPSS 18 software, to derive patient-population-based estimates of delivered treated volumes relative to ideally treated volumes. Packing was evaluated according to bladder dose, rectum dose, and the geometry of the dose distribution by three consultant radiation oncologists. The difference of treated volumes depends on the type of applicator used in the fractionated brachytherapy. The mean of the "difference of treated volume" (DTV) was -0.48 cm³ for the "evenly activated tandem (ET) length" group and 11.85 cm³ for the "unevenly activated tandem length (UET)" group. The range of the DTV was 35.80 cm³ for the ET group, whereas it was 104.80 cm³ for the UET group. A one-sample t-test was performed to compare the DTV with the ideal treated volume difference (0.00 cm³): the p-value was 0.732 for the ET group and 0.00 for the UET group. Moreover, an independent two-sample t-test was performed to compare the ET and UET groups, and the calculated p-value was 0.005. Packing was evaluated under three categories: 59.38% of treatments used a convenient packing technique, 33.33% a fairly convenient packing technique, and 7.29% a non-convenient packing technique. The random variation of the treated volume in the ET group is much lower than in the UET group, and there is a significant difference (p<0.05) between the ET and UET groups, which affects the dose distribution of the treatment. Furthermore, it can be concluded that nearly 92.71% of patients' packing used an acceptable packing technique at NCIM, Sri Lanka.
Keywords: brachytherapy, cervical cancer, high dose rate, tandem, treated volumes
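A small sketch of the two tests reported above, assuming synthetic DTV samples for the ET and UET groups; the sample sizes and spreads are illustrative, not the study data.

```python
# Sketch of the reported statistics: a one-sample t-test of each group's DTV
# against the ideal 0.00 cm3 and an independent two-sample t-test between
# groups (synthetic data in place of the 150-patient dataset).
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
dtv_et = rng.normal(-0.48, 7.0, 75)    # evenly activated tandem group
dtv_uet = rng.normal(11.85, 20.0, 75)  # unevenly activated tandem group

print("ET vs 0:", stats.ttest_1samp(dtv_et, 0.0))
print("UET vs 0:", stats.ttest_1samp(dtv_uet, 0.0))
print("ET vs UET:", stats.ttest_ind(dtv_et, dtv_uet))
```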
Procedia PDF Downloads 204
4646 Relations of Progression in Cognitive Decline with Initial EEG Resting-State Functional Network in Mild Cognitive Impairment
Authors: Chia-Feng Lu, Yuh-Jen Wang, Yu-Te Wu, Sui-Hing Yan
Abstract:
This study aimed at investigating whether the functional brain networks constructed using the initial EEG (obtained when patients first visited the hospital) correlate with the progression of cognitive decline, calculated as the change in mini-mental state examination (MMSE) scores between the latest and initial examinations. We integrated the time-frequency cross mutual information (TFCMI) method to estimate the EEG functional connectivity between cortical regions, with network analysis based on graph theory to investigate the organization of functional networks in aMCI. Our findings suggest that a highly integrated functional network with sufficient connection strengths, dense connections between local regions, and high network efficiency in processing information at the initial stage may result in a better prognosis of subsequent cognitive function in aMCI. In conclusion, functional connectivity can be a useful biomarker to assist in the prediction of cognitive decline in aMCI.
Keywords: cognitive decline, functional connectivity, MCI, MMSE
Procedia PDF Downloads 388
4645 Evaluation of Particle Settling in Flow Chamber
Authors: Abdulrahman Alenezi, B. Stefan
Abstract:
The investigation of fluids containing particles or filaments belongs to the category of complex fluids and is vital in both theory and application. The forecasting of particle behavior plays a significant role in existing as well as future technology. This paper focuses on predicting particle behavior through the investigation of particle disentrainment from a pipe into a horizontal air stream, which allows examination of the influence of the particles' physical properties on their behavior when falling into a horizontal air stream. The investigation was conducted on a device located at the University of Greenwich's Medway Campus. Two materials were selected for this study: salt and glass-bead particles. The shape of the salt particles is cubic, whereas the glass beads are almost spherical. The outcomes of the experimental work were presented in terms of the distance travelled by the particles according to their diameters. The particle sizes were then measured using a laser diffraction device and used to determine the drag coefficient and the settling velocity.
Keywords: flow experiment, drag coefficient, particle settling, flow chamber
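A back-of-the-envelope sketch of the drag/settling calculation mentioned above, assuming spherical particles in air and the Schiller-Naumann drag correlation; the densities and diameters are illustrative assumptions, not measured values from the study.

```python
# Terminal settling velocity of a sphere in air via fixed-point iteration on
# the Schiller-Naumann drag correlation (valid for Re < ~800).
import math

rho_f, mu, g = 1.2, 1.8e-5, 9.81        # air density, viscosity, gravity (SI)

def settling_velocity(d, rho_p):
    v = rho_p * g * d**2 / (18 * mu)    # Stokes-law initial guess
    for _ in range(100):
        re = rho_f * v * d / mu
        cd = 24 / re * (1 + 0.15 * re**0.687)
        v_new = math.sqrt(4 * g * d * (rho_p - rho_f) / (3 * cd * rho_f))
        if abs(v_new - v) < 1e-9:
            break
        v = v_new
    return v

for name, d, rho in [("salt", 300e-6, 2160.0), ("glass bead", 300e-6, 2500.0)]:
    print(f"{name}: v_t ~ {settling_velocity(d, rho):.3f} m/s")
```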
Procedia PDF Downloads 144
4644 The Digital Transformation of Life Insurance Sales in Iran With the Emergence of Personal Financial Planning Robots; Opportunities and Challenges
Authors: Pedram Saadati, Zahra Nazari
Abstract:
Anticipating and identifying future opportunities and challenges facing industry players arising from the emergence of new knowledge and technologies for personal financial planning, and providing practical solutions, is one of the goals of this research. For this purpose, a futures-research tool based on gathering opinions from the main players of the insurance industry was used. The research method consisted of 4 stages: 1) a survey of the specialist life insurance sales force to identify the variables; 2) ranking of the variables by selected experts via a researcher-made questionnaire; 3) an expert panel aimed at understanding the mutual effects of the variables; and 4) statistical analysis of the cross-impact matrix in MICMAC software. The integrated analysis of the variables influencing the future was done with the structural analysis method, one of the efficient and innovative methods of futures research. A list of opportunities and challenges was identified through a survey of best-selling life insurance representatives, who were selected by snowball sampling. In order to prioritize and identify the most important issues, all the issues raised were sent to selected experts, chosen theoretically, through a researcher-made questionnaire. The respondents determined the importance of 36 variables through scoring, so that the prioritization of the opportunity and challenge variables could be determined. Eight of the variables identified in the first stage were removed by the selected experts, leaving 28 variables for examination in the third stage; to facilitate the examination, these were divided into 6 categories: organization and management (11 variables), marketing and sales (7), social and cultural (6), technological (2), rebranding (1), and insurance (1). The reliability of the researcher-made questionnaire was confirmed with a Cronbach's alpha of 0.96. In the third stage, a panel of 5 insurance industry experts recorded the consensus of their opinions about the influence of factors on each other and the ranking of variables in the matrix. The matrix captured the interrelationships of the 28 variables, which were investigated using the structural analysis method. Analysis of the matrix data with MICMAC software indicates that the categories "correct training in the use of the software", "weakness of insurance companies' technology in personalizing products", "using a customer-equipping approach", and "honesty in declaring when the customer does not need insurance" are the most important influencing challenges, while "a salesforce-equipping approach", "product personalization based on customer needs assessment", "a pleasant customer experience of being advised by consulting robots", "business improvement of the insurance company due to the use of these tools", "increased efficiency of the issuance process", and "optimal customer purchase" were identified as the most important influencing opportunities.
Keywords: personal financial planning, wealth management, advisor robots, life insurance, digital transformation
Procedia PDF Downloads 52
4643 Low-Cost, Portable Optical Sensor with Regression Algorithm Models for Accurate Monitoring of Nitrites in Environments
Authors: David X. Dong, Qingming Zhang, Meng Lu
Abstract:
Nitrites enter waterways as runoff from croplands and are discharged from many industrial sites. Excessive nitrite inputs to water bodies lead to eutrophication. On-site rapid detection of nitrite is of increasing interest for managing fertilizer application and monitoring water source quality. Existing methods for detecting nitrites use spectrophotometry, ion chromatography, electrochemical sensors, ion-selective electrodes, chemiluminescence, and colorimetric methods. However, these methods either suffer from high cost or provide low measurement accuracy due to their poor selectivity to nitrites. Therefore, it is desirable to develop an accurate and economical method to monitor nitrites in the environment. We report a low-cost optical sensor, in conjunction with a machine learning (ML) approach, to enable high-accuracy detection of nitrites in water sources. The sensor works on the principle of measuring the molecular absorption of nitrites at three narrowband wavelengths (295 nm, 310 nm, and 357 nm) in the ultraviolet (UV) region. These wavelengths are chosen because they have relatively high sensitivity to nitrites, and low-cost light-emitting devices (LEDs) and photodetectors are available at these wavelengths. A regression model is built, trained, and utilized to minimize the cross-sensitivities of these wavelengths, thus achieving precise and reliable measurements in the presence of various interfering ions. The measured absorbance data is input to the trained model, which provides a nitrite concentration prediction for the sample. The sensor is built with i) a miniature quartz cuvette as the test cell that contains the liquid sample under test, ii) three low-cost UV LEDs placed on one side of the cell as light sources, each LED providing narrowband light, and iii) a photodetector with a built-in amplifier and an analog-to-digital converter placed on the other side of the test cell to measure the power of the transmitted light. This simple optical design allows measuring the absorbance of the sample at the three wavelengths. To train the regression model, the absorbances of nitrite ions and their combinations with various interfering ions are first obtained at the three UV wavelengths using a conventional spectrophotometer. The spectrophotometric data are then input to different regression algorithm models for training and for evaluating high-accuracy nitrite concentration prediction. Our experimental results show that the proposed approach enables instantaneous nitrite detection within several seconds. The sensor hardware costs about one hundred dollars, which is much cheaper than a commercial spectrophotometer. The ML algorithm helps to reduce the average relative error to below 3.5% over a concentration range from 0.1 ppm to 100 ppm of nitrites. The sensor has been validated by measuring nitrites at three sites in Ames, Iowa, USA. This work demonstrates an economical and effective approach to the rapid, reagent-free determination of nitrites with high accuracy. The integration of the low-cost optical sensor and ML data processing can find a wide range of applications in environmental monitoring and management.
Keywords: optical sensor, regression model, nitrites, water quality
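A minimal sketch of the calibration step, assuming a three-column absorbance matrix (295, 310, 357 nm) and known nitrite concentrations; the Beer-Lambert sensitivities and all data are synthetic stand-ins for the trained regression model.

```python
# Sketch: calibrate a regression model mapping three-wavelength absorbance
# (295, 310, 357 nm) to nitrite concentration, then predict a new sample.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(7)
conc = rng.uniform(0.1, 100, 200)                  # ppm, training standards
eps = np.array([0.021, 0.015, 0.009])              # assumed sensitivities
A = np.outer(conc, eps) + rng.normal(0, 1e-3, (200, 3))  # Beer-Lambert + noise

model = Ridge(alpha=1e-3).fit(A, conc)
sample = np.array([[0.42, 0.30, 0.18]])            # measured absorbances
print("predicted nitrite: %.2f ppm" % model.predict(sample)[0])
```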
Procedia PDF Downloads 76