Search results for: machine repair

1991 Seismic Performance of Isolated Bridge Configurations with Soil Structure Interaction

Authors: Davide Forcellini

Abstract:

Recent developments in earthquake engineering are based on design concepts that prescribe performance rather than on the more traditional prescriptive approaches. This paper assesses the effects of isolation devices and soil-structure interaction on a benchmark bridge adopting a Performance-Based Earthquake Engineering (PBEE) methodology. Several isolated configurations of abutment and pier connections are compared, implementing the most representative isolation devices. The suitability of an isolation system depends on many factors, mainly connected with ground effects. In this regard, the second purpose of this paper is to assess the effects of soil-structure interaction (SSI) on the studied bridge configurations. The contributions of the isolation technique and of soil-structure interaction are assessed by evaluating the response at several Peak Ground Acceleration (PGA) levels in terms of repair cost and repair time quantities.

Keywords: base isolation, bridge, earthquake engineering, non linearity, PBEE methodology, seismic assessment, soil structure interaction

Procedia PDF Downloads 419
1990 Estimation of the Exergy-Aggregated Value Generated by a Manufacturing Process Using the Theory of the Exergetic Cost

Authors: German Osma, Gabriel Ordonez

Abstract:

The production of metal-rubber spare parts for vehicles is a sequential process consisting of the transformation of raw material through cutting activities and chemical and thermal treatments, which demand electricity and fossil fuels. Energy efficiency analysis in these cases mostly focuses on each machine or production step; it is less common to study the quality of the production process from an aggregated-value viewpoint, which can be used as a quality measure for determining the impact on the environment. In this paper, the theory of exergetic cost is used to determine the exergy aggregated to three metal-rubber spares, based on an exergy analysis and a thermoeconomic analysis. Because these spares are manufactured in batches, the theory is applied to discontinuous flows starting from single models of the workstations; subsequently, the complete exergy model of each product is built using flowcharts. These models represent the exergy flows between components within the machines according to electrical, mechanical and/or thermal expressions; they determine the exergy demanded to produce the effective transformation of the raw materials (the aggregated exergy value) and the exergy losses caused by equipment and irreversibilities. The energy resources of the manufacturing process are electricity and natural gas. The workstations considered are lathes, punching presses, cutters, a zinc machine, chemical treatment tanks, hydraulic vulcanizing presses and a rubber mixer. The thermoeconomic analysis was carried out by workstation and by spare; the first describes the operation of the components of each machine and locates the exergy losses, while the second estimates the exergy-aggregated value of the finished product and the wasted feedstock. Results indicate that the exergy efficiency of a mechanical workstation is between 10% and 60%, while in the thermal workstations it is less than 5%; also, each effective exergy-aggregated value is about one-thirtieth of the total exergy required to operate the manufacturing process, which amounts to approximately 2 MJ. These shortcomings are caused mainly by technical limitations of the machines, oversizing of the metal feedstock, which demands more mechanical transformation work, and poor thermal insulation of the chemical treatment tanks and hydraulic vulcanizing presses. From this information, it is possible to appreciate the usefulness of the theory of exergetic cost for analyzing aggregated value in manufacturing processes.

Keywords: exergy-aggregated value, exergy efficiency, thermoeconomics, exergy modeling

Procedia PDF Downloads 160
1989 Sustainable Rehabilitation of an Ancient Structure

Authors: Ram Narayan Khare, Aradhna Shrivastava, Adhyatma Khare

Abstract:

This paper focuses on the damage that has occurred in an ancient structure due to various factors such as rainfall, climate, insects, lifespan and, most importantly, the lack of technologies in the era of its construction. The structure is of lime surkhi masonry and was built a century ago. It has exceeded its service life but is of historical importance for the area, which is why its rehabilitation deserves the utmost attention. The paper deals with the damage that has occurred in the structure and with how to repair and renovate it, keeping in mind that the materials should not deviate from the originals, because they show how structures were built in the ancient era. The building used lime surkhi mortar along with wood apple as a fibrous material to provide adhesiveness in the masonry binding. The paper supports sustainable retrofitting of the structure without changing its integrity. This helps maintain the originality of the structure in the present era and also provides information to upcoming generations on how ancient civil construction was carried out such that it has withstood more than a century.

Keywords: Lime Surkhi masonry, rehabilitation, sustainable development, historical building

Procedia PDF Downloads 11
1988 Hazard Alert in Malaysia Related to Occupational Safety and Health

Authors: Atikah Binti Azudin, Nurin Nazlah Binti Muhamad Yani, Nur Alya Nadhirah Binti Naaidith, Nur Amylia Wahida Binti Mat Ayob, Nurshamimi Shakirah Binti Suboh, Nur Auni Batrisyia Binti Md. Zaini, Nur Aziemah Binti Mohamad, Nurul Suffiyah Binti Sa’Dun, Sabrina Sasha Izzati Binti Zubaile, Umi Huwaina Binti Ahmiruddin, Wan Nur Shafawati Binti Wan Ghazali

Abstract:

A hazard alert is intended to provide brief information about significant incidents or existing problems in Department workplaces. The alert gives guidelines for proper processes, practices, and controls to be applied. When operated in accordance with the manufacturer's instructions, any machine or tool utilized at work provides a safe and dependable platform for workers to accomplish their job duties. However, when not utilized appropriately, a machine might pose a major hazard to employees. Employers have a duty to keep employees safe in this scenario. This hazard alert outlines specific occupational dangers and the controls that employers must apply to prevent injuries or fatal accidents. There have been several hazard-alert cases in Malaysia that have had a negative impact on workers. On the bright side, every incident can be mitigated in a variety of ways. One of these is ensuring that only qualified individuals operate mobile machinery and equipment. In addition, employees can perform frequent pre-use inspections of machinery to discover and fix flaws. Hazard alerts are very important, and this study covers a variety of related subjects, including the methods employed.

Keywords: safe, hazard, impacts, duties

Procedia PDF Downloads 80
1987 Ensemble Methods in Machine Learning: An Algorithmic Approach to Derive Distinctive Behaviors of Criminal Activity Applied to the Poaching Domain

Authors: Zachary Blanks, Solomon Sonya

Abstract:

Poaching presents a serious threat to endangered animal species, environmental conservation, and human life. Additionally, some poaching activity has even been linked to supplying funds to support terrorist networks elsewhere around the world. Consequently, agencies dedicated to protecting wildlife habitats have a near intractable task of adequately patrolling an entire area (spanning several thousand kilometers) given the limited resources, funds, and personnel at their disposal. Thus, agencies need predictive tools that are both high-performing and easily implementable by the user to help in learning how the significant features (e.g., animal population densities, topography, behavior patterns of the criminals within the area, etc.) interact with each other, in hopes of abating poaching. This research develops a classification model, using machine learning algorithms, to aid in forecasting future attacks; the model is both easy to train and performs well when compared to other models. In this research, we demonstrate how data imputation methods (specifically predictive mean matching, gradient boosting, and random forest multiple imputation) can be applied to analyze data and create significant predictions across a varied data set. Specifically, we apply these methods to improve the accuracy of adopted prediction models (Logistic Regression, Support Vector Machine, etc.). Finally, we assess the performance of the model and the accuracy of our data imputation methods by learning on a real-world data set constituting four years of imputed data and testing on one year of non-imputed data. This paper provides three main contributions. First, we extend work done by the Teamcore and CREATE (Center for Risk and Economic Analysis of Terrorism Events) research group at the University of Southern California (USC), working in conjunction with the Department of Homeland Security, to apply game theory and machine learning algorithms to develop more efficient ways of reducing poaching. This research introduces ensemble methods (Random Forests and Stochastic Gradient Boosting) and applies them to real-world poaching data gathered from the Ugandan rain forest park rangers. Next, we consider the effect of data imputation on both the performance of various algorithms and the general accuracy of the method itself when applied to a dependent variable where a large number of observations are missing. Third, we provide an alternate approach to predict the probability of observing poaching both by season and by month. The results from this research are very promising. We conclude that by using Stochastic Gradient Boosting to predict observations for non-commercial poaching by season, we are able to produce statistically equivalent results while being orders of magnitude faster in computation time and complexity. Additionally, when predicting potential poaching incidents by individual month rather than by entire seasons, boosting techniques produce a mean area under the curve increase of approximately 3% relative to previous prediction schedules by entire seasons.
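
The imputation-plus-boosting pipeline described in this abstract can be sketched with scikit-learn as follows; the feature names, data shapes, and parameter values are illustrative assumptions, not the authors' actual configuration, and the iterative imputer with a random-forest estimator is used here as an open-source analogue of random forest multiple imputation.

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401 (enables IterativeImputer)
from sklearn.ensemble import GradientBoostingClassifier, RandomForestRegressor
from sklearn.impute import IterativeImputer
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
# Hypothetical grid-cell features: animal density, elevation, distance to patrol post.
X = rng.normal(size=(2000, 3))
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=2000) > 1).astype(int)
X[rng.random(X.shape) < 0.2] = np.nan          # simulate missing observations

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Multiple-imputation analogue: iterative imputation with a random-forest estimator.
imputer = IterativeImputer(estimator=RandomForestRegressor(n_estimators=50, random_state=0),
                           max_iter=5, random_state=0)
X_tr_imp = imputer.fit_transform(X_tr)
X_te_imp = imputer.transform(X_te)

# Stochastic gradient boosting: subsample < 1.0 makes each tree see a random data subset.
model = GradientBoostingClassifier(n_estimators=200, subsample=0.8, random_state=0)
model.fit(X_tr_imp, y_tr)
print("AUC:", roc_auc_score(y_te, model.predict_proba(X_te_imp)[:, 1]))
```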

Keywords: ensemble methods, imputation, machine learning, random forests, statistical analysis, stochastic gradient boosting, wildlife protection

Procedia PDF Downloads 277
1986 Steady-State Behavior of a Multi-Phase M/M/1 Queue in Random Evolution Subject to Catastrophe Failure

Authors: Reni M. Sagayaraj, Anand Gnana S. Selvam, Reynald R. Susainathan

Abstract:

In this paper, we consider stochastic queueing models for the steady-state behavior of a multi-phase M/M/1 queue in random evolution subject to catastrophic failure. The arrival flow of customers is described by a marked Markovian arrival process. The service times of the different types of customers follow phase-type distributions with different parameters. To facilitate the investigation of the system, we use a generalized phase-type service time distribution. The model contains a repair state; when a catastrophe occurs, the system is transferred to the failure state. The paper focuses on the steady-state equations, from which the steady-state behavior of the underlying queueing model, along with the average queue size, is analyzed.
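
For reference, the classical single-phase M/M/1 queue without catastrophes admits the well-known steady-state solution below; the multi-phase, catastrophe-prone model studied in the paper generalizes these balance equations.

```latex
% Classical M/M/1 baseline (no catastrophes): arrival rate \lambda, service rate \mu
\lambda \, \pi_{n-1} = \mu \, \pi_{n} \quad (n \ge 1), \qquad
\rho = \frac{\lambda}{\mu} < 1, \qquad
\pi_{n} = (1-\rho)\,\rho^{n}, \qquad
L = \sum_{n \ge 0} n \, \pi_{n} = \frac{\rho}{1-\rho}.
```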

Keywords: M/G/1 queuing system, multi-phase, random evolution, steady-state equation, catastrophe failure

Procedia PDF Downloads 317
1985 Evaluation of Random Forest and Support Vector Machine Classification Performance for the Prediction of Early Multiple Sclerosis from Resting State FMRI Connectivity Data

Authors: V. Saccà, A. Sarica, F. Novellino, S. Barone, T. Tallarico, E. Filippelli, A. Granata, P. Valentino, A. Quattrone

Abstract:

The aim of this work was to evaluate how well Random Forest (RF) and Support Vector Machine (SVM) algorithms could support the early diagnosis of Multiple Sclerosis (MS) from resting-state functional connectivity data. In particular, we wanted to explore the ability to distinguish between controls and patients using the mean signals extracted from ICA components corresponding to 15 well-known networks. Eighteen patients with early MS (mean age 37.42±8.11, 9 females) were recruited according to the McDonald and Polman criteria and matched for demographic variables with 19 healthy controls (mean age 37.55±14.76, 10 females). MRI was acquired on a 3T scanner with an 8-channel head coil: (a) whole-brain T1-weighted; (b) conventional T2-weighted; (c) resting-state functional MRI (rsFMRI), 200 volumes. Estimated total lesion load (ml) and number of lesions were calculated using the LST toolbox from the corrected T1 and FLAIR. All rsFMRI data were pre-processed using tools from the FMRIB Software Library as follows: (1) discarding of the first 5 volumes to remove T1 equilibrium effects, (2) skull-stripping of images, (3) motion and slice-time correction, (4) denoising with a high-pass temporal filter (128 s), (5) spatial smoothing with a Gaussian kernel of FWHM 8 mm. No statistically significant differences (t-test, p < 0.05) were found between the two groups in the mean Euclidean distance and the mean Euler angle. WM and CSF signals, together with 6 motion parameters, were regressed out from the time series. We applied an independent component analysis (ICA) with the GIFT toolbox using the Infomax approach with number of components = 21. Fifteen mean components were visually identified by two experts. The resulting z-score maps were thresholded and binarized to extract the mean signal of the 15 networks for each subject. Statistical and machine learning analyses were then conducted on this dataset composed of 37 rows (subjects) and 15 features (mean signal in the network) with the R language. The dataset was randomly split into training (75%) and test sets, and two different classifiers were trained: RF and RBF-SVM. We used the intrinsic feature selection of RF, based on the Gini index, and recursive feature elimination (rfe) for the SVM, to obtain a rank of the most predictive variables. Thus, we built two new classifiers only on the most important features and evaluated the accuracies (with and without feature selection) on the test set. The classifiers trained on all the features showed very poor accuracies on the training (RF: 58.62%, SVM: 65.52%) and test sets (RF: 62.5%, SVM: 50%). Interestingly, when feature selection by RF and rfe-SVM was performed, the most important variable was the sensori-motor network I in both cases. Indeed, with only this network, the RF and SVM classifiers reached an accuracy of 87.5% on the test set. More interestingly, the only misclassified patient turned out to have the lowest value of lesion volume. We showed that, with two different classification algorithms and feature selection approaches, the best discriminant network between controls and early MS was the sensori-motor I. Similar importance values were obtained for the sensori-motor II, cerebellum and working memory networks. These findings, in accordance with the early manifestation of motor/sensory deficits in MS, could represent an encouraging step toward translation to clinical diagnosis and prognosis.
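
Although the study was carried out in R, the two-classifier comparison with feature selection can be sketched in scikit-learn as below; the 37×15 matrix of network signals is simulated here, so the data, labels, and parameters are assumptions rather than the study's material.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(42)
X = rng.normal(size=(37, 15))          # mean signal of 15 ICA networks per subject (simulated)
y = np.array([0] * 19 + [1] * 18)      # 19 controls, 18 early-MS patients

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, stratify=y, random_state=0)

# Random forest: intrinsic (Gini-based) feature importance.
rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)
rf_rank = np.argsort(rf.feature_importances_)[::-1]

# Linear-kernel SVM wrapped in recursive feature elimination (rfe).
rfe = RFE(SVC(kernel="linear"), n_features_to_select=1).fit(X_tr, y_tr)
svm_rank = np.argsort(rfe.ranking_)

# Retrain an RBF-SVM on the single most predictive network found by each ranking.
for name, idx in [("RF", rf_rank[0]), ("RFE-SVM", svm_rank[0])]:
    clf = SVC(kernel="rbf").fit(X_tr[:, [idx]], y_tr)
    print(name, "top feature:", int(idx), "test accuracy:", clf.score(X_te[:, [idx]], y_te))
```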

Keywords: feature selection, machine learning, multiple sclerosis, random forest, support vector machine

Procedia PDF Downloads 228
1984 i2kit: A Tool for Immutable Infrastructure Deployments

Authors: Pablo Chico De Guzman, Cesar Sanchez

Abstract:

Microservice architectures are increasingly used in distributed cloud applications due to their advantages in software composition, development speed, release cycle frequency and business-logic time to market. On the other hand, these architectures also introduce some challenges in the testing and release phases of applications. Container technology solves some of these issues by providing reproducible environments, ease of software distribution and isolation of processes. However, there are other issues that remain unsolved in current container technology when dealing with multiple machines, such as networking for multi-host communication, service discovery, load balancing or data persistency (even though some of these challenges are already solved by traditional cloud vendors in a very mature and widespread manner). Container cluster management tools, such as Kubernetes, Mesos or Docker Swarm, attempt to solve these problems by introducing a new control layer where the unit of deployment is the container (or the pod — a set of strongly related containers that must be deployed on the same machine). These tools are complex to configure and manage, and they do not follow a pure immutable infrastructure approach, since servers are reused between deployments. Indeed, these tools introduce dependencies at execution time for solving networking or service discovery problems. If an error occurs in the control layer, which would affect running applications, specific expertise is required to perform ad-hoc troubleshooting. As a consequence, it is not surprising that container cluster support is becoming a source of revenue for consulting services. This paper presents i2kit, a deployment tool based on the immutable infrastructure pattern, where the virtual machine is the unit of deployment. The input for i2kit is a declarative definition of a set of microservices, where each microservice is defined as a pod of containers. Microservices are built into machine images using linuxkit, a tool for creating minimal Linux distributions specialized in running containers. These machine images are then deployed to one or more virtual machines, which are exposed through a cloud vendor load balancer. Finally, the load balancer endpoint is set into other microservices using an environment variable, providing service discovery. The toolkit i2kit reuses the best ideas from container technology to solve problems like reproducible environments, process isolation, and software distribution, and at the same time relies on mature, proven cloud vendor technology for networking, load balancing and persistency. The result is a more robust system with no learning curve for troubleshooting running applications. We have implemented an open-source prototype that transforms i2kit definitions into AWS CloudFormation templates, where each microservice AMI (Amazon Machine Image) is created on the fly using linuxkit. Even though container cluster management tools have more flexibility for resource allocation optimization, we argue that adding a new control layer entails more important disadvantages. Resource allocation is greatly improved by using linuxkit, which introduces a very small footprint (around 35MB). Also, the system is more secure, since linuxkit installs the minimum set of dependencies needed to run containers. The toolkit i2kit is currently under development at the IMDEA Software Institute.

Keywords: container, deployment, immutable infrastructure, microservice

Procedia PDF Downloads 169
1983 Machine Learning Approaches Based on Recency, Frequency, Monetary (RFM) and K-Means for Predicting Electrical Failures and Voltage Reliability in Smart Cities

Authors: Panaya Sudta, Wanchalerm Patanacharoenwong, Prachya Bumrungkun

Abstract:

With the evolution of smart grids, ensuring the reliability and efficiency of electrical systems in smart cities has become crucial. This paper proposes a distinct approach that combines advanced machine learning techniques to accurately predict electrical failures and address voltage reliability issues. This approach aims to improve the accuracy and efficiency of reliability evaluations in smart cities. The aim of this research is to develop a comprehensive predictive model that accurately predicts electrical failures and voltage reliability in smart cities. This model integrates RFM analysis, K-means clustering, and LSTM networks to achieve this objective. The research utilizes RFM analysis, traditionally used in customer value assessment, to categorize and analyze electrical components based on their failure recency, frequency, and monetary impact. K-means clustering is employed to segment electrical components into distinct groups with similar characteristics and failure patterns. LSTM networks are used to capture the temporal dependencies and patterns in customer data. This integration of RFM, K-means, and LSTM results in a robust predictive tool for electrical failures and voltage reliability. The proposed model has been tested and validated on diverse electrical utility datasets. The results show a significant improvement in prediction accuracy and reliability compared to traditional methods, achieving an accuracy of 92.78% and an F1-score of 0.83. This research contributes to the proactive maintenance and optimization of electrical infrastructures in smart cities. It also enhances overall energy management and sustainability. The integration of advanced machine learning techniques in the predictive model demonstrates the potential for transforming the landscape of electrical system management within smart cities. The research utilizes diverse electrical utility datasets to develop and validate the predictive model. RFM analysis, K-means clustering, and LSTM networks are applied to these datasets to analyze and predict electrical failures and voltage reliability. The research addresses the question of how accurately electrical failures and voltage reliability can be predicted in smart cities. It also investigates the effectiveness of integrating RFM analysis, K-means clustering, and LSTM networks in achieving this goal. The proposed approach presents a distinct, efficient, and effective solution for predicting and mitigating electrical failures and voltage issues in smart cities. It significantly improves prediction accuracy and reliability compared to traditional methods. This advancement contributes to the proactive maintenance and optimization of electrical infrastructures, overall energy management, and sustainability in smart cities.
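
The RFM-plus-K-means stage of the pipeline could be prototyped as in the following sketch; the column names, distribution choices, and cluster count are illustrative assumptions (the LSTM stage, which would consume the resulting segments, is omitted here).

```python
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical failure log: one row per electrical component (all values are simulated).
rng = np.random.default_rng(7)
df = pd.DataFrame({
    "recency_days":  rng.integers(1, 365, 500),    # days since the last recorded failure
    "frequency":     rng.integers(0, 20, 500),     # failures in the observation window
    "monetary_cost": rng.gamma(2.0, 500.0, 500),   # repair/outage cost per component
})

# Standardize the R, F, M features and segment components into k groups.
X = StandardScaler().fit_transform(df[["recency_days", "frequency", "monetary_cost"]])
df["segment"] = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)

# Segment profiles would feed the downstream LSTM stage as categorical context features.
print(df.groupby("segment").mean().round(2))
```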

Keywords: electrical state prediction, smart grids, data-driven method, long short-term memory, RFM, k-means, machine learning

Procedia PDF Downloads 42
1982 Applying Artificial Neural Networks to Predict Speed Skater Impact Concussion Risk

Authors: Yilin Liao, Hewen Li, Paula McConvey

Abstract:

Speed skaters often face a risk of concussion when they fall on the ice and impact crash mats during practices and competitive races. Several variables, including those related to the skater, the crash mat, and the impact position (body side/head/feet impact), are believed to influence the severity of the skater's concussion. While computer simulation modeling can be employed to analyze these accidents, the simulation process is time-consuming and does not provide rapid information for coaches and teams to assess the skater's injury risk in competitive events. This research paper explores the feasibility of using AI techniques to evaluate a skater's potential concussion severity and develops a fast concussion prediction tool using artificial neural networks to reduce the risk of treatment delays for injured skaters. The primary data is collected through virtual tests and physical experiments designed to simulate skater-mat impacts. It is then analyzed to identify patterns and correlations; finally, it is used to train and fine-tune the artificial neural networks for accurate prediction. The development of the prediction tool by employing machine learning strategies contributes to the application of AI methods in sports science and has theoretical implications for using AI techniques in predicting and preventing sports-related injuries.

Keywords: artificial neural networks, concussion, machine learning, impact, speed skater

Procedia PDF Downloads 85
1981 Melanoma and Non-Melanoma Skin Lesion Classification Using a Deep Learning Model

Authors: Shaira L. Kee, Michael Aaron G. Sy, Myles Joshua T. Tan, Hezerul Abdul Karim, Nouar AlDahoul

Abstract:

Skin diseases are considered the fourth most common disease, with melanoma and non-melanoma skin cancer as the most common type of cancer in Caucasians. The alarming increase in Skin Cancer cases shows an urgent need for further research to improve diagnostic methods, as early diagnosis can significantly improve the 5-year survival rate. Machine Learning algorithms for image pattern analysis in diagnosing skin lesions can dramatically increase the accuracy rate of detection and decrease possible human errors. Several studies have shown the diagnostic performance of computer algorithms outperformed dermatologists. However, existing methods still need improvements to reduce diagnostic errors and generate efficient and accurate results. Our paper proposes an ensemble method to classify dermoscopic images into benign and malignant skin lesions. The experiments were conducted using the International Skin Imaging Collaboration (ISIC) image samples. The dataset contains 3,297 dermoscopic images with benign and malignant categories. The results show improvement in performance with an accuracy of 88% and an F1 score of 87%, outperforming other existing models such as support vector machine (SVM), Residual network (ResNet50), EfficientNetB0, EfficientNetB4, and VGG16.

Keywords: deep learning, VGG16, EfficientNet, CNN, ensemble, dermoscopic images, melanoma

Procedia PDF Downloads 72
1980 Noise Measurement and Awareness at Construction Site: A Case Study

Authors: Feiruz Ab'lah, Zarini Ismail, Mohamad Zaki Hassan, Siti Nadia Mohd Bakhori, Mohamad Azlan Suhot, Mohd Yusof Md. Daud, Shamsul Sarip

Abstract:

The construction industry is one of the major sectors in Malaysia. Apart from providing facilities, services, and goods, it also offers employment opportunities to local and foreign workers. Construction workers are exposed to hazardous levels of noise generated from various sources, including excavators, bulldozers, concrete mixers, and piling machines. Previous studies indicated that piling and concrete work were the main sources contributing to the highest levels of noise. Therefore, the aim of this study is to measure the noise exposure during the piling process and to determine the awareness of workers of noise pollution at the construction site. Initially, noise readings were obtained at the construction site using a digital sound level meter (SLM), and the noise exposure of the workers was mapped. Readings were taken at four different distances from the piling machine: 5, 10, 15 and 20 meters. Furthermore, a set of questionnaires was distributed to assess knowledge regarding noise pollution at the construction site. The results showed that the mean noise level at a 5 m distance was more than 90 dB, which exceeded the recommended level. Although the level of awareness regarding the effects of noise pollution is satisfactory, the majority of workers (90%) still did not wear ear protection devices during the work period. Therefore, safety module guidelines related to noise pollution controls should be implemented to provide a safe working environment and prevent initial occupational hearing loss.

Keywords: construction, noise awareness, noise pollution, piling machine

Procedia PDF Downloads 367
1979 Improved Classification Procedure for Imbalanced and Overlapped Situations

Authors: Hankyu Lee, Seoung Bum Kim

Abstract:

The issue of imbalance and overlap in the class distribution is important in various applications of data mining. An imbalanced dataset is a special case in classification problems in which the number of observations of one class (i.e., the major class) heavily exceeds the number of observations of the other class (i.e., the minor class). An overlapped dataset is the case where many observations are shared between the two classes. Imbalanced and overlapped data can frequently be found in many real examples, including fraud and abuse patients in healthcare, quality prediction in manufacturing, text classification, oil spill detection, remote sensing, and so on. The class imbalance and overlap problem is a challenging issue because this situation degrades the performance of most of the standard classification algorithms. In this study, we propose a classification procedure that can effectively handle imbalanced and overlapped datasets by splitting the data space into three parts (non-overlapping, lightly overlapping, and severely overlapping) and applying the classification algorithm in each part. These three parts were determined based on the Hausdorff distance and the margin of a modified support vector machine. An experimental study was conducted to examine the properties of the proposed method and compare it with other classification algorithms. The results showed that the proposed method outperformed the competitors under various imbalanced and overlapped situations. Moreover, the applicability of the proposed method was demonstrated through an experiment with real data.
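
One way the three-way split could be realized is sketched below; the overlap thresholds, the use of a plain (rather than modified) SVM, and the simulated data are all assumptions made for illustration, not the authors' exact procedure.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff
from sklearn.svm import SVC

rng = np.random.default_rng(1)
# Imbalanced, partially overlapping two-class data (minor class shifted toward the major class).
X_major = rng.normal(loc=0.0, size=(900, 2))
X_minor = rng.normal(loc=1.5, size=(100, 2))
X = np.vstack([X_major, X_minor])
y = np.array([0] * 900 + [1] * 100)

# Symmetric Hausdorff distance quantifies how far apart the two class regions are.
h = max(directed_hausdorff(X_major, X_minor)[0],
        directed_hausdorff(X_minor, X_major)[0])

# Margin of an (unmodified) SVM: samples close to the decision boundary form the overlap band.
svm = SVC(kernel="rbf", class_weight="balanced").fit(X, y)
score = np.abs(svm.decision_function(X))
severe = score < 0.5                    # severe overlap: very close to the boundary
light = (score >= 0.5) & (score < 1.0)  # light overlap
clean = score >= 1.0                    # non-overlapping region
print(f"Hausdorff={h:.2f}, severe={severe.sum()}, light={light.sum()}, clean={clean.sum()}")
```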

Keywords: classification, imbalanced data with class overlap, split data space, support vector machine

Procedia PDF Downloads 299
1978 Smart Disassembly of Waste Printed Circuit Boards: The Role of IoT and Edge Computing

Authors: Muhammad Mohsin, Fawad Ahmad, Fatima Batool, Muhammad Kaab Zarrar

Abstract:

The integration of the Internet of Things (IoT) and edge computing devices offers a transformative approach to electronic waste management, particularly in the dismantling of printed circuit boards (PCBs). This paper explores how these technologies optimize operational efficiency and improve environmental sustainability by addressing challenges such as data security, interoperability, scalability, and real-time data processing. Proposed solutions include advanced machine learning algorithms for predictive maintenance, robust encryption protocols, and scalable architectures that incorporate edge computing. Case studies from leading e-waste management facilities illustrate benefits such as improved material recovery efficiency, reduced environmental impact, improved worker safety, and optimized resource utilization. The findings highlight the potential of IoT and edge computing to revolutionize e-waste dismantling and make the case for a collaborative approach between policymakers, waste management professionals, and technology developers. This research provides important insights into the use of IoT and edge computing to make significant progress in the sustainable management of electronic waste.

Keywords: internet of Things, edge computing, waste PCB disassembly, electronic waste management, data security, interoperability, machine learning, predictive maintenance, sustainable development

Procedia PDF Downloads 12
1977 Architectural Knowledge Systems Related to Use of Terracotta in Bengal

Authors: Nandini Mukhopadhyay

Abstract:

The prominence of terracotta as a building material in Bengal is well justified by the region's geographical location. The architectural knowledge system associated with terracotta can be comprehended through the typology of the built structures, as they act as texts through which to interpret that knowledge. The history of Bengal has witnessed the influence of several rulers in developing the architectural vocabulary of the region. This metamorphosis of the architectural knowledge systems in the region includes the Bhakti movement, the Islamic influence, and British rule, which led to the evolution of the use of terracotta from decorative elements to structural elements in the present-day context. This paper intends to develop an understanding of terracotta as a building material, its use in built structures, the common problems associated with terracotta construction, and the techniques of maintenance, repair, and conservation. The paper also explores the size, shape, and geometry of the material and its varied use in the temples and mosques of the region. It also notes that the use of terracotta was concentrated mainly in religious structures and not in the settlements of the common people, and that the architectural style of the temples and mosques of Bengal is hugely influenced by the houses of the common people.

Keywords: terracotta, material, knowledge system, conservation

Procedia PDF Downloads 137
1976 A Machine Learning Approach for Earthquake Prediction in Various Zones Based on Solar Activity

Authors: Viacheslav Shkuratskyy, Aminu Bello Usman, Michael O’Dea, Saifur Rahman Sabuj

Abstract:

This paper examines relationships between solar activity and earthquakes by applying machine learning techniques: K-nearest neighbour, support vector regression, random forest regression, and a long short-term memory network. Data from the SILSO World Data Center, the NOAA National Center, the GOES satellite, NASA OMNIWeb, and the United States Geological Survey were used for the experiment. The 23rd and 24th solar cycles, daily sunspot number, solar wind velocity, proton density, and proton temperature were all included in the dataset. The study also examined sunspots, solar wind, and solar flares, which all reflect solar activity, and the earthquake frequency distribution by magnitude and depth. The findings showed that the long short-term memory network model predicts earthquakes more accurately than the other models applied in the study, and that solar activity is more likely to affect earthquakes of lower magnitude and shallow depth than earthquakes of magnitude 5.5 or larger at intermediate and deep depths.

Keywords: k-nearest neighbour, support vector regression, random forest regression, long short-term memory network, earthquakes, solar activity, sunspot number, solar wind, solar flares

Procedia PDF Downloads 63
1975 Reconstructability Analysis for Landslide Prediction

Authors: David Percy

Abstract:

Landslides are a geologic phenomenon that affects a large number of inhabited places and are constantly being monitored and studied for the prediction of future occurrences. Reconstructability analysis (RA) is a methodology for extracting informative models from large volumes of data; it works exclusively with discrete data. While RA has been used extensively in medical applications and social science, we are introducing it to the spatial sciences through applications like landslide prediction. Since RA works only with discrete data, such as soil classification or bedrock type, working with continuous data, such as porosity, requires that these data be binned for inclusion in the model. RA constructs models of the data which pick out the most informative elements, the independent variables (IVs), from each layer that predict the dependent variable (DV), landslide occurrence. Each layer included in the model retains its classification data as a primary encoding of the data. Unlike other machine learning algorithms that force the data into one-hot-encoding type schemes, RA works directly with the data as encoded, with the exception of continuous data, which must be binned. The usual physical and derived layers are included in the model, and testing our results against other published methodologies, such as neural networks, yields accuracy that is similar but with the advantage of a completely transparent model. The result of an RA session with a data set is a report on every combination of variables and their probability of landslide events occurring. In this way, every combination of informative variable states can be examined.
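
Binning a continuous layer such as porosity so that it can enter an RA-style model could look like the sketch below; the column names, bin edges, and simulated values are assumptions for illustration only.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
cells = pd.DataFrame({
    "porosity": rng.uniform(0.0, 0.6, 1000),                          # continuous layer -> must be binned
    "bedrock":  rng.choice(["basalt", "sandstone", "shale"], 1000),   # already discrete
    "landslide": rng.integers(0, 2, 1000),                            # dependent variable (DV)
})

# Discretize porosity into ordinal bins so it can be treated like any other categorical layer.
cells["porosity_bin"] = pd.cut(cells["porosity"], bins=[0.0, 0.2, 0.4, 0.6],
                               labels=["low", "medium", "high"], include_lowest=True)

# Probability of a landslide for every combination of IV states (an RA-style summary report).
print(cells.groupby(["porosity_bin", "bedrock"], observed=True)["landslide"].mean())
```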

Keywords: reconstructability analysis, machine learning, landslides, raster analysis

Procedia PDF Downloads 49
1974 Computation of ΔV Requirements for Space Debris Removal Using Orbital Transfer

Authors: Sadhvi Gupta, Charulatha S.

Abstract:

Since the early 1950s, humans have launched numerous vehicles into space. From rockets to rovers, humanity has achieved tremendous growth in the technology sector. While this has mostly been an upside for humans, the one major downside that can no longer be ignored is the amount of junk produced in space as a result, i.e., space debris. All this space junk comes from objects we launch from Earth, which remain in orbit until they re-enter the atmosphere. Space debris comes in various sizes: the big pieces are mainly dead satellites floating in space, while the small ones can consist of various things like paint flecks, screwdrivers, bolts, etc. Tracking small space debris whose size is less than 10 cm is impossible, and such debris can have vast implications. As the amount of space debris increases, the chance of it hitting a functional satellite also increases, and it is extremely costly to repair or recover a satellite once it has been hit by orbiting debris. The proposed solution is therefore to actively remove space debris while keeping space sustainability in mind. For this solution, a total of 8 modules will be launched into LEO and GEO, and these modules will be placed in their desired orbits through Hohmann transfers, for which calculating the ΔV values is crucial. The modules are then placed in their designated positions in the STK software and a thorough analysis is conducted.
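
The ΔV budget of a coplanar Hohmann transfer follows from the standard two-burn formulas; the sketch below computes them for a LEO-to-GEO raise using common Earth constants and an assumed 500 km parking orbit, which are illustrative values rather than the mission's actual module orbits.

```python
import math

MU = 3.986004418e14        # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6_378_137.0      # Earth's equatorial radius, m

def hohmann_dv(r1: float, r2: float) -> tuple[float, float]:
    """Return the two impulsive burns (m/s) for a coplanar Hohmann transfer from r1 to r2."""
    dv1 = math.sqrt(MU / r1) * (math.sqrt(2.0 * r2 / (r1 + r2)) - 1.0)   # departure burn
    dv2 = math.sqrt(MU / r2) * (1.0 - math.sqrt(2.0 * r1 / (r1 + r2)))   # circularization burn
    return dv1, dv2

r_leo = R_EARTH + 500e3    # assumed 500 km circular parking orbit
r_geo = 42_164e3           # geostationary orbit radius
dv1, dv2 = hohmann_dv(r_leo, r_geo)
print(f"dv1 = {dv1:.0f} m/s, dv2 = {dv2:.0f} m/s, total = {dv1 + dv2:.0f} m/s")
```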

Keywords: space debris, Hohmann transfer, STK, delta-V

Procedia PDF Downloads 77
1973 A Hierarchical Method for Multi-Class Probabilistic Classification Vector Machines

Authors: P. Byrnes, F. A. DiazDelaO

Abstract:

The Support Vector Machine (SVM) has become widely recognised as one of the leading algorithms in machine learning for both regression and binary classification. It expresses predictions in terms of a linear combination of kernel functions, referred to as support vectors. Despite its popularity amongst practitioners, SVM has some limitations, the most significant being the generation of point predictions as opposed to predictive distributions. Stemming from this issue, a probabilistic model, namely Probabilistic Classification Vector Machines (PCVM), has been proposed, which respects the original functional form of SVM whilst also providing a predictive distribution. As physical system designs become more complex, an increasing number of classification tasks involving industrial applications consist of more than two classes. Consequently, this research proposes a framework which allows for the extension of PCVM to a multi-class setting. Additionally, the original PCVM framework relies on the use of type II maximum likelihood to provide estimates for both the kernel hyperparameters and the model evidence. In a high-dimensional multi-class setting, however, this approach has been shown to be ineffective due to poor scaling as the number of classes increases. Accordingly, we propose the application of Markov Chain Monte Carlo (MCMC) based methods to provide a posterior distribution over both parameters and hyperparameters. The proposed framework will be validated against current multi-class classifiers through synthetic and real-life implementations.

Keywords: probabilistic classification vector machines, multi class classification, MCMC, support vector machines

Procedia PDF Downloads 216
1972 Optimal Maintenance Policy for a Three-Unit System

Authors: A. Abbou, V. Makis, N. Salari

Abstract:

We study the condition-based maintenance (CBM) problem for a system subject to stochastic deterioration. The system is composed of three units (or modules): (i) Module 1 deterioration follows a Markov process with two operational states and one failure state. The operational states are partially observable through periodic condition monitoring. (ii) Module 2 deterioration follows a Gamma process with a known failure threshold. The deterioration level of this module is fully observable through periodic inspections. (iii) Only operating-age information is available for Module 3. The lifetime of this module has a general distribution. A CBM policy prescribes when to initiate a maintenance intervention and which modules to repair during the intervention. Our objective is to determine the optimal CBM policy minimizing the long-run expected average cost of operating the system. This is achieved by formulating a Markov decision process (MDP) and developing a value iteration algorithm for solving the MDP. We provide numerical examples illustrating the cost-effectiveness of the optimal CBM policy through a comparison with heuristic policies commonly found in the literature.

Keywords: reliability, maintenance optimization, Markov decision process, heuristics

Procedia PDF Downloads 207
1971 The Artificial Intelligence Driven Social Work

Authors: Avi Shrivastava

Abstract:

Our world continues to grapple with many social issues. Economic growth and scientific advancements have not completely eradicated poverty, homelessness, discrimination and bias, gender inequality, health issues, mental illness, addiction, and other social issues. So, how do we improve the human condition in a world driven by advanced technology? The answer is simple: we will have to leverage technology to address some of the most important social challenges of the day. AI, or artificial intelligence, has emerged as a critical tool in the battle against issues that deprive marginalized and disadvantaged groups of the right to enjoy the benefits that a society offers. Social work professionals can transform lives by harnessing it. The lack of reliable data is one of the reasons why many social work projects fail. Social work professionals continue to rely on expensive and time-consuming primary data collection methods, such as observation, surveys, questionnaires, and interviews, instead of tapping into AI-based technology to generate useful, real-time data and the necessary insights. By leveraging AI's data-mining ability, we can gain a deeper understanding of how to solve complex social problems and change people's lives. We can do the right work for the right people at the right time. For example, AI can enable social work professionals to focus their humanitarian efforts on some of the world's poorest regions, where there is extreme poverty. An interdisciplinary team of Stanford scientists, Marshall Burke, Stefano Ermon, David Lobell, Michael Xie, and Neal Jean, used AI to spot global poverty zones; identifying such zones is a key step in the fight against poverty. The scientists combined daytime and nighttime satellite imagery with machine learning algorithms to predict poverty in Nigeria, Uganda, Tanzania, Rwanda, and Malawi. In an article published by Stanford News, "Stanford researchers use dark of night and machine learning," Ermon explained that they provided the machine-learning system, an application of AI, with high-resolution satellite images and asked it to predict poverty in the African region. "The system essentially learned how to solve the problem by comparing those two sets of images [daytime and nighttime]." This is one example of how AI can be used by social work professionals to reach the regions that need their aid the most. It can also help identify sources of inequality and conflict, which could reduce inequalities, according to the Nature study titled "The role of artificial intelligence in achieving the Sustainable Development Goals," published in 2020. The report also notes that AI can help achieve 79 percent of the United Nations' (UN) Sustainable Development Goals (SDGs). AI is impacting our everyday lives in multiple amazing ways, yet some people do not know much about it. If someone is not familiar with this technology, they may be reluctant to use it to solve social issues. So, before we talk more about the use of AI to accomplish social work objectives, let's put the spotlight on how AI and social work can complement each other.

Keywords: social work, artificial intelligence, AI based social work, machine learning, technology

Procedia PDF Downloads 92
1970 Studies on Performance of an Airfoil and Its Simulation

Authors: Rajendra Roul

Abstract:

The main objective of this project is to draw attention to the performance of an aerofoil when exposed to the fluid medium inside a wind tunnel. The project involves civil as well as mechanical engineering, thereby making itself a multidisciplinary project. An aerofoil of the desired size is taken into consideration to carry out the project effectively. An aerofoil is the shape of the wing or blade of a propeller, rotor or turbine. Many experiments have been carried out in wind tunnels with an aerofoil as the reference object to inform the future design of turbine blades, cars and aircraft. Lift and drag have become the major identification factors for any design industry, which shows that wind tunnel testing along with software analysis (ANSYS) has become a mandatory task for researchers forecasting an aerodynamic design. This project is an initiative towards the mitigation of drag, better lift and analysis of the wake surface profile by investigating the surface pressure distribution. Readings were taken on an airfoil model in a Wind Tunnel Testing Machine (WTTM) at different air velocities (20 m/s, 25 m/s, 30 m/s) and different angles of attack (0°, 5°, 10°, 15°, 20°). Air velocity and pressures are measured in several ways in the wind tunnel testing machine using measuring instruments such as an anemometer and a multi-tube manometer. Moreover, to make the analysis more accurate, the contribution of Ansys Fluent becomes substantial, and subsequently the CFD simulation results. Analysis of an aerofoil has a wide spectrum of applications beyond aerodynamics, including wind loads in the design of buildings and bridges for structural engineers.

Keywords: wind-tunnel, aerofoil, Ansys, multitube manometer

Procedia PDF Downloads 402
1969 Improving Activity Recognition Classification of Repetitious Beginner Swimming Using a 2-Step Peak/Valley Segmentation Method with Smoothing and Resampling for Machine Learning

Authors: Larry Powell, Seth Polsley, Drew Casey, Tracy Hammond

Abstract:

Human activity recognition (HAR) systems have shown positive performance when recognizing repetitive activities like walking, running, and sleeping. Water-based activities are a reasonably new area for activity recognition. However, water-based activity recognition has largely focused on supporting the elite and competitive swimming population, which already has amazing coordination and proper form. Beginner swimmers are not perfect, and activity recognition needs to support the individual motions to help beginners. Activity recognition algorithms are traditionally built around short segments of timed sensor data. Using a time window input can cause performance issues in the machine learning model. The window’s size can be too small or large, requiring careful tuning and precise data segmentation. In this work, we present a method that uses a time window as the initial segmentation, then separates the data based on the change in the sensor value. Our system uses a multi-phase segmentation method that pulls all peaks and valleys for each axis of an accelerometer placed on the swimmer’s lower back. This results in high recognition performance using leave-one-subject-out validation on our study with 20 beginner swimmers, with our model optimized from our final dataset resulting in an F-Score of 0.95.
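
The two-step segmentation idea, a coarse time window refined by splitting at peaks and valleys, could be prototyped as below; the sampling rate, simulated signal, and peak-spacing parameters are illustrative assumptions rather than the study's configuration.

```python
import numpy as np
from scipy.signal import find_peaks

# Simulated lower-back accelerometer axis during repetitive strokes (50 Hz, illustrative only).
fs = 50
t = np.arange(0, 20, 1 / fs)
accel = np.sin(2 * np.pi * 0.8 * t) + 0.1 * np.random.default_rng(0).normal(size=t.size)

# Step 1: coarse time window as the initial segmentation.
window = accel[: 10 * fs]

# Step 2: refine by splitting at the peaks and valleys of the signal.
peaks, _ = find_peaks(window, distance=fs // 2)      # local maxima, at least 0.5 s apart
valleys, _ = find_peaks(-window, distance=fs // 2)   # local minima via the negated signal
boundaries = np.sort(np.concatenate([peaks, valleys]))

# Each consecutive pair of boundaries becomes one sub-segment for feature extraction.
segments = [window[a:b] for a, b in zip(boundaries[:-1], boundaries[1:])]
print(f"{len(segments)} sub-segments, mean length "
      f"{np.mean([len(s) for s in segments]):.1f} samples")
```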

Keywords: time window, peak/valley segmentation, feature extraction, beginner swimming, activity recognition

Procedia PDF Downloads 109
1968 MIMIC: A Multi Input Micro-Influencers Classifier

Authors: Simone Leonardi, Luca Ardito

Abstract:

Micro-influencers are effective elements in the marketing strategies of companies and institutions because of their capability to create a hyper-engaged audience around a specific topic of interest. In recent years, many scientific approaches and commercial tools have handled the task of detecting this type of social media user. These strategies adopt solutions ranging from rule-based machine learning models to deep neural networks and graph analysis on text, images, and account information. This work compares the existing solutions and proposes an ensemble method to generalize them across different input data and social media platforms. The deployed solution combines deep learning models on unstructured data with statistical machine learning models on structured data. We retrieve both social media account information and multimedia posts from Twitter and Instagram. These data are mapped into feature vectors for an eXtreme Gradient Boosting (XGBoost) classifier. Sixty different topics have been analyzed to build a rule-based gold-standard dataset and to compare the performance of our approach against baseline classifiers. We prove the effectiveness of our work by comparing the accuracy, precision, recall, and F1 score of our model with different configurations and architectures. We obtained an accuracy of 0.91 with our best performing model.
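
The feature-level fusion feeding the XGBoost classifier could be sketched as follows; the feature groups, embedding sizes, and labels are simulated assumptions, standing in for the account statistics and the representations produced upstream by the deep models.

```python
import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
n = 1200
# Structured account statistics (illustrative): follower count, engagement rate, posts/week.
account = rng.normal(size=(n, 3))
# Embeddings produced upstream by deep models on post text and images (simulated here).
text_emb = rng.normal(size=(n, 16))
image_emb = rng.normal(size=(n, 16))
y = (account[:, 1] + text_emb[:, 0] > 0.5).astype(int)   # 1 = micro-influencer

# Feature-level fusion: concatenate all representations into one vector per account.
X = np.hstack([account, text_emb, image_emb])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

clf = XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.1)
clf.fit(X_tr, y_tr)
print("F1:", round(f1_score(y_te, clf.predict(X_te)), 3))
```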

Keywords: deep learning, gradient boosting, image processing, micro-influencers, NLP, social media

Procedia PDF Downloads 166
1967 Design of Quality Assessment System for On-Orbit 3D Printing Based on 3D Reconstruction Technology

Authors: Jianning Tang, Trevor Hocksun Kwan, Xiaofeng Wu

Abstract:

With the increasing demand for space use in multiple sectors (navigation, telecommunication, imagery, etc.), the deployment and maintenance demands of satellites are growing. Considering the high launch cost and the restrictions on the weight and size of the payload when using a launch vehicle, the technique of on-orbit manufacturing has received more attention because of its significant potential to support future space missions. 3D printing is the most promising manufacturing technology that could be applied in space. However, due to the lack of autonomous quality assessment, the operation of conventional 3D printers still relies on human presence to supervise the printing process. This paper proposes the development of an automatic 3D reconstruction system aimed at detecting failures on 3D printed objects through the application of point cloud technology. Based on the data obtained from the point cloud, the 3D printer can locate and repair the failure. The system will increase automation and provide 3D printing with more feasibility for space use without human interference.

Keywords: 3D printing, quality assessment, point cloud, on-orbit manufacturing

Procedia PDF Downloads 108
1966 Feature Based Unsupervised Intrusion Detection

Authors: Deeman Yousif Mahmood, Mohammed Abdullah Hussein

Abstract:

The goal of a network-based intrusion detection system is to classify the activities of network traffic into two major categories: normal and attack (intrusive) activities. Nowadays, data mining and machine learning play an important role in many sciences, including intrusion detection systems (IDS), using both supervised and unsupervised techniques. One of the essential steps of data mining is feature selection, which helps in improving the efficiency, performance and prediction rate of the proposed approach. This paper applies the unsupervised K-means clustering algorithm with information gain (IG) for feature selection and reduction to build a network intrusion detection system. For our experimental analysis, we have used the new NSL-KDD dataset, which is a modified version of the KDDCup 1999 intrusion detection benchmark dataset. With a split of 60.0% for the training set and the remainder for the testing set, a two-class classification (Normal, Attack) has been implemented. The Weka framework, a Java-based open-source software package consisting of a collection of machine learning algorithms for data mining tasks, was used in the testing process. The experimental results show that the proposed approach is very accurate, with a low false positive rate and a high true positive rate, and it takes less learning time in comparison with using the full feature set of the dataset with the same algorithm.
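
Although the experiments were run in Weka, the same information-gain-plus-K-means pipeline can be sketched with scikit-learn (mutual information is the standard information-gain analogue there); the simulated 41-feature matrix, labels, and the number of selected features below are assumptions for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(3000, 41))                 # 41 NSL-KDD-style numeric features (simulated)
y = (X[:, 4] + X[:, 12] > 1).astype(int)        # 0 = normal, 1 = attack

X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.6, random_state=0)

# Information gain (mutual information) keeps only the most informative features.
selector = SelectKBest(mutual_info_classif, k=10).fit(X_tr, y_tr)
X_tr_sel, X_te_sel = selector.transform(X_tr), selector.transform(X_te)

# Unsupervised K-means with two clusters; clusters are then labeled by majority vote.
scaler = StandardScaler().fit(X_tr_sel)
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(scaler.transform(X_tr_sel))
cluster_label = np.array([np.bincount(y_tr[km.labels_ == c]).argmax() for c in range(2)])
pred = cluster_label[km.predict(scaler.transform(X_te_sel))]
print("test accuracy:", round(float((pred == y_te).mean()), 3))
```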

Keywords: information gain (IG), intrusion detection system (IDS), k-means clustering, Weka

Procedia PDF Downloads 285
1965 An Investigation into Computer Vision Methods to Identify Material Other Than Grapes in Harvested Wine Grape Loads

Authors: Riaan Kleyn

Abstract:

Mass wine production companies across the globe are provided with grapes from winegrowers that predominantly utilize mechanical harvesting machines to harvest wine grapes. Mechanical harvesting accelerates the rate at which grapes are harvested, allowing grapes to be delivered faster to meet the demands of wine cellars. The disadvantage of the mechanical harvesting method is the inclusion of material-other-than-grapes (MOG) in the harvested wine grape loads arriving at the cellar which degrades the quality of wine that can be produced. Currently, wine cellars do not have a method to determine the amount of MOG present within wine grape loads. This paper seeks to find an optimal computer vision method capable of detecting the amount of MOG within a wine grape load. A MOG detection method will encourage winegrowers to deliver MOG-free wine grape loads to avoid penalties which will indirectly enhance the quality of the wine to be produced. Traditional image segmentation methods were compared to deep learning segmentation methods based on images of wine grape loads that were captured at a wine cellar. The Mask R-CNN model with a ResNet-50 convolutional neural network backbone emerged as the optimal method for this study to determine the amount of MOG in an image of a wine grape load. Furthermore, a statistical analysis was conducted to determine how the MOG on the surface of a grape load relates to the mass of MOG within the corresponding grape load.

Keywords: computer vision, wine grapes, machine learning, machine harvested grapes

Procedia PDF Downloads 76
1964 The Ability of Adipose Derived Mesenchymal Stem Cells for Diabetes Mellitus Type 2 Treatment

Authors: Purwati, Sony Wibisono, Ari Sutjahjo, Askandar T. J., Fedik A. Rantam

Abstract:

Diabetes mellitus type 2 (T2DM), characterized by hyperglycemia, results from insulin resistance and relative insulin deficiency. Diabetes mellitus is the main cause of premature death, particularly among individuals under the age of 70. Mesenchymal stem cells (MSCs) can release bioactive molecules that promote tissue repair and regeneration. Hence, in this research, we evaluated the potential of autologous adipose-derived mesenchymal stem cells (AD-MSCs) in a phase I clinical trial of 40 T2DM patients aged between 30 and 79 years. AD-MSCs were transferred through catheterization. The MSCs were validated by measuring CD105+ and CD34- expression. The results showed that after AD-MSC transplantation, blood glucose levels (fasting and 2-hour postprandial) and insulin levels decreased significantly. In addition, the HbA1c level decreased significantly three months after AD-MSC injection, and the c-peptide level increased after injection. Thus, we conclude that AD-MSC injection has potential for T2DM therapy.

Keywords: glucose, hyperglycemia, MSCs, T2DM

Procedia PDF Downloads 68
1963 A Static Android Malware Detection Based on Actual Used Permissions Combination and API Calls

Authors: Xiaoqing Wang, Junfeng Wang, Xiaolan Zhu

Abstract:

The Android operating system has been embraced by most application developers because of its open-source nature and compatibility, which greatly enriches the categories of available applications. However, it has become a target of malware attackers due to the lack of strict security supervision mechanisms, which has led to the rapid growth of malware and brought serious safety hazards to users. Therefore, it is critical to detect Android malware effectively. Generally, the permissions declared in AndroidManifest.xml can reflect the function and behavior of an application to a large extent. Since the current Android system places no restrictions on the number of permissions that an application can request, developers tend to request more permissions than are actually needed in order to ensure the successful running of the application, which results in the abuse of permissions. However, some traditional detection methods only consider the requested permissions and ignore whether they are actually used, which leads to the incorrect identification of some malware. Therefore, a machine learning detection method based on the actually used permission combinations and API calls is put forward in this paper. Several experiments are conducted to evaluate our methodology. The results show that it can detect unknown malware effectively, with a higher true positive rate and accuracy, while maintaining a low false positive rate. The AdaboostM1 (J48) classification algorithm combined with the information gain feature selection algorithm gave the best detection result, achieving an accuracy of 99.8%, a true positive rate of 99.6% and a lowest false positive rate of 0.
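
A scikit-learn analogue of the Weka AdaBoostM1 + J48 setup with information-gain feature selection is sketched below; the binary permission/API feature matrix, labels, and parameter values are simulated assumptions, and `estimator=` follows the scikit-learn ≥ 1.2 argument name.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score, recall_score

rng = np.random.default_rng(0)
# Binary feature matrix: actually-used permission combinations and API-call indicators (simulated).
X = rng.integers(0, 2, size=(4000, 120))
y = ((X[:, 3] & X[:, 17]) | X[:, 42]).astype(int)     # 1 = malware

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Information-gain (mutual information) feature selection before boosting.
sel = SelectKBest(mutual_info_classif, k=30).fit(X_tr, y_tr)

# AdaBoost over decision trees: an open-source analogue of Weka's AdaBoostM1 with J48.
clf = AdaBoostClassifier(estimator=DecisionTreeClassifier(max_depth=3),
                         n_estimators=100, random_state=0)
clf.fit(sel.transform(X_tr), y_tr)
pred = clf.predict(sel.transform(X_te))
print("accuracy:", accuracy_score(y_te, pred),
      "true positive rate:", recall_score(y_te, pred))
```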

Keywords: android, API Calls, machine learning, permissions combination

Procedia PDF Downloads 320
1962 Preprocessing and Fusion of Multiple Representations of Finger Vein Patterns Using Conventional and Machine Learning Techniques

Authors: Tomas Trainys, Algimantas Venckauskas

Abstract:

The application of biometric features to cryptography for human identification and authentication is a widely studied and promising area in the development of high-reliability cryptosystems. Biometric cryptosystems are typically designed for pattern recognition: biometric data are acquired from an individual, feature sets are extracted, the feature set is compared against the set stored in the vault, and a result of the comparison is given. Preprocessing and fusion of biometric data are the most important phases in generating a feature vector for key generation or authentication. Fusion of biometric features is critical for achieving a higher level of security and preventing possible spoofing attacks. The paper focuses on the tasks of initial processing and fusion of multiple representations of finger vein modality patterns. These tasks are solved by applying conventional image preprocessing methods and machine learning techniques, including a Convolutional Neural Network (CNN) for image segmentation and feature extraction. The article presents a method for generating sets of biometric features from a finger vein network using several instances of the same modality. The extracted feature sets were fused at the feature level. The proposed method was tested and compared with the performance and accuracy results of other authors.

Keywords: bio-cryptography, biometrics, cryptographic key generation, data fusion, information security, SVM, pattern recognition, finger vein method

Procedia PDF Downloads 137