Search results for: machine modelling
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4562


422 A Dataset of Program Educational Objectives Mapped to ABET Outcomes: Data Cleansing, Exploratory Data Analysis and Modeling

Authors: Addin Osman, Anwar Ali Yahya, Mohammed Basit Kamal

Abstract:

Datasets and collections are becoming important assets in themselves, and they can now be accepted as a primary intellectual output of research. The quality and usefulness of a dataset depend mainly on the context under which it has been collected, processed, analyzed, validated, and interpreted. This paper presents a collection of program educational objectives mapped to student outcomes, gathered from self-study reports prepared by 32 engineering programs accredited by ABET. The manual mapping (classification) of these data is a notoriously tedious, time-consuming process; in addition, it requires experts in the area, who are often unavailable. The operational settings under which the collection was produced are described. The collection has been cleansed and preprocessed, features have been selected, and preliminary exploratory data analysis has been performed to illustrate the properties and usefulness of the collection. Finally, the collection has been benchmarked using nine of the most widely used supervised multi-label classification techniques (Binary Relevance, Label Powerset, Classifier Chains, Pruned Sets, Random k-label sets, Ensemble of Classifier Chains, Ensemble of Pruned Sets, Multi-Label k-Nearest Neighbors and Back-Propagation Multi-Label Learning). The techniques have been compared to each other using well-known measures (Accuracy, Hamming Loss, Micro-F, and Macro-F). The Ensemble of Classifier Chains and Ensemble of Pruned Sets achieved encouraging performance compared to the other multi-label classification methods, while Classifier Chains showed the worst performance. In summary, the benchmark achieved promising results, and the preliminary exploratory data analysis performed on the collection proposes new directions for research and provides a baseline for future studies.
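As a rough illustration of two of the evaluation measures named above, the following pure-Python sketch computes Hamming Loss and a micro-averaged F measure on a toy multi-label prediction; the label matrices are invented for illustration and are not drawn from the ABET collection.

```python
def hamming_loss(y_true, y_pred):
    """Fraction of all label slots that are predicted incorrectly."""
    total = sum(len(row) for row in y_true)
    wrong = sum(t != p for tr, pr in zip(y_true, y_pred) for t, p in zip(tr, pr))
    return wrong / total

def micro_f(y_true, y_pred):
    """Micro-averaged F measure: pool TP/FP/FN counts over all labels."""
    pairs = [(t, p) for tr, pr in zip(y_true, y_pred) for t, p in zip(tr, pr)]
    tp = sum(1 for t, p in pairs if t == 1 and p == 1)
    fp = sum(1 for t, p in pairs if t == 0 and p == 1)
    fn = sum(1 for t, p in pairs if t == 1 and p == 0)
    return 2 * tp / (2 * tp + fp + fn)

# Toy binary label matrices: 3 documents x 3 labels (invented, not ABET data).
y_true = [[1, 0, 1], [0, 1, 0], [1, 1, 0]]
y_pred = [[1, 0, 0], [0, 1, 0], [1, 0, 0]]

print(f"Hamming loss: {hamming_loss(y_true, y_pred):.3f}")  # 2 wrong slots of 9
print(f"Micro-F:      {micro_f(y_true, y_pred):.3f}")
```

Binary Relevance, Classifier Chains and the other listed techniques differ in how they produce `y_pred`, but all are scored with measures of this kind.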

Keywords: ABET, accreditation, benchmark collection, machine learning, program educational objectives, student outcomes, supervised multi-label classification, text mining

Procedia PDF Downloads 172
421 Finite Element Analysis of the Anaconda Device: Efficiently Predicting the Location and Shape of a Deployed Stent

Authors: Faidon Kyriakou, William Dempster, David Nash

Abstract:

Abdominal Aortic Aneurysm (AAA) is a major life-threatening pathology for which modern approaches reduce the need for open surgery through the use of stenting. The success of stenting, though, is sometimes jeopardized by the final position of the stent graft inside the human artery, which may result in migration, endoleaks or blood flow occlusion. Herein, a finite element (FE) model of the commercial medical device Anaconda™ (Vascutek, Terumo) has been developed and validated in order to create a numerical tool able to provide useful clinical insight before the surgical procedure takes place. The Anaconda™ device consists of a series of NiTi rings sewn onto woven polyester fabric, a structure that, despite its column stiffness, is flexible enough to be used in very tortuous geometries. For the purposes of this study, an FE model of the device was built in Abaqus® (version 6.13-2) with a combination of beam, shell and surface elements; the choice of these building blocks was made to keep the computational cost to a minimum. The validation of the numerical model was performed by comparing the deployed position of a full stent graft device inside a constructed AAA with a duplicate set-up in Abaqus®. Specifically, an AAA geometry was built in CAD software and included regions of both high and low tortuosity. Subsequently, the CAD model was 3D printed into a transparent aneurysm, and a stent was deployed in the lab following the steps of the clinical procedure. Images on the frontal and sagittal planes of the experiment allowed the comparison with the results of the numerical model. By overlapping the experimental and computational images, the mean and maximum distances between the rings of the two models were measured in the longitudinal and transverse directions, and a 5 mm upper bound was set as a limit commonly used by clinicians when working with simulations.
The two models showed very good agreement in their spatial positioning, especially in the less tortuous regions. As a result, and despite the inherent uncertainties of a surgical procedure, the FE model allows confidence that the final position of the stent graft, when deployed in vivo, can be predicted with significant accuracy. Moreover, the numerical model runs in just a few hours, an encouraging result for applications in the clinical routine. In conclusion, the efficient modelling of a complicated structure which combines thin scaffolding and fabric has been demonstrated to be feasible. Furthermore, the ability to predict the location of each stent ring, as well as the global shape of the graft, has been demonstrated. This can allow surgeons to better plan their procedures and medical device manufacturers to optimize their designs. The current model can further be used as a starting point for patient-specific CFD analysis.
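The ring-distance check described above can be sketched as follows; the coordinates are hypothetical stand-ins for ring centres digitised from the overlaid images, not the study's measurements.

```python
import math

# Hypothetical ring-centre coordinates (mm) digitised from the overlaid
# frontal-plane images; these values are illustrative, not the study's data.
experimental = [(0.0, 10.0), (1.5, 30.2), (3.1, 50.5), (4.0, 70.9)]
computational = [(0.4, 10.6), (1.1, 29.5), (2.6, 51.4), (3.2, 71.5)]

# Per-ring Euclidean distance between the two models, then summary statistics.
distances = [math.dist(e, c) for e, c in zip(experimental, computational)]
mean_d = sum(distances) / len(distances)
max_d = max(distances)

print(f"mean ring distance: {mean_d:.2f} mm, max: {max_d:.2f} mm")
print("within the 5 mm clinical bound:", max_d <= 5.0)
```

The same comparison would be repeated for the longitudinal and transverse directions separately, as the abstract describes.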

Keywords: AAA, efficiency, finite element analysis, stent deployment

Procedia PDF Downloads 191
420 Increasing Recoverable Oil in Northern Afghanistan Kashkari Oil Field by Low-Salinity Water Flooding

Authors: Zabihullah Mahdi, Khwaja Naweed Seddiqi

Abstract:

Afghanistan is located in a tectonically complex and dynamic area, surrounded by rocks that originated on the mother continent of Gondwanaland. The northern Afghanistan basin, which runs along the country's northern border, has the potential for petroleum generation and accumulation. The Amu Darya basin has the largest petroleum potential in the region. Sedimentation occurred in the Amu Darya basin from the Jurassic to the Eocene epochs. The Kashkari oil field is located in northern Afghanistan's Amu Darya basin. The field structure consists of a narrow northeast-southwest (NE-SW) anticline with two structural highs, the northwest limb being mild and the southeast limb being steep. The first oil production well in the Kashkari oil field was drilled in 1976, and a total of ten wells were drilled in the area between 1976 and 1979. The amount of original oil in place (OOIP) in the Kashkari oil field, based on the results of surveys and calculations conducted by research institutions, is estimated to be around 140 MMbbls. The objective of this study is to increase recoverable oil reserves in the Kashkari oil field through the implementation of the low-salinity water flooding (LSWF) enhanced oil recovery (EOR) technique. The LSWF study involved conducting a core flooding laboratory test consisting of four sequential steps with varying salinities. The test commenced with the use of formation water (FW) as the initial salinity, which was subsequently reduced to a salinity level of 0.1%. Afterward, a numerical simulation model of core-scale oil recovery by LSWF was designed in Computer Modelling Group’s General Equation Modeler (CMG-GEM) software to evaluate the applicability of the technology to the field scale. Next, the Kashkari oil field simulation model was designed, and the LSWF method was applied to it.
To obtain reasonable results, the laboratory settings (temperature, pressure, rock, and oil characteristics) were designed as far as possible based on the conditions of the Kashkari oil field, and several injection and production patterns were investigated. The relative permeability of oil and water in this study was obtained using Corey’s equation. In the Kashkari oilfield simulation model, three models were considered for evaluating the effect of LSWF on oil recovery: (1) a base model with no water injection, (2) an FW injection model, and (3) an LSW injection model. Based on the results of the LSWF laboratory experiment and computer simulation analysis, the oil recovery increased rapidly after the FW was injected into the core. Subsequently, by injecting 1% salinity water, a gradual increase of 4% in oil recovery can be observed. About 6.4% additional oil is recovered at field scale by the application of the LSWF technique. The results of LSWF (salinity 0.1%) on the Kashkari oil field suggest that this technology can be a successful method for developing Kashkari oil production.
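The abstract states that relative permeabilities were obtained using Corey's equation; a minimal sketch of Corey-type curves is given below, with endpoint saturations, endpoint permeabilities, and exponents chosen as illustrative assumptions rather than Kashkari data.

```python
# Corey-type relative permeability curves. All endpoint values and exponents
# below are illustrative assumptions, not measured Kashkari properties.
SWC, SOR = 0.2, 0.25           # connate water / residual oil saturations
KRW_MAX, KRO_MAX = 0.4, 0.9    # endpoint relative permeabilities
NW, NO = 3.0, 2.0              # Corey exponents for water and oil

def corey_krw_kro(sw):
    """Return (krw, kro) at water saturation sw via Corey's equations."""
    swn = (sw - SWC) / (1.0 - SWC - SOR)   # normalised water saturation
    swn = min(max(swn, 0.0), 1.0)          # clamp to the mobile range [0, 1]
    return KRW_MAX * swn ** NW, KRO_MAX * (1.0 - swn) ** NO

for sw in (0.2, 0.4, 0.6, 0.75):
    krw, kro = corey_krw_kro(sw)
    print(f"Sw={sw:.2f}  krw={krw:.3f}  kro={kro:.3f}")
```

In a simulator such as CMG-GEM these curves would be entered as saturation tables; low-salinity effects are often represented by shifting the endpoints between a high-salinity and a low-salinity curve set.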

Keywords: low-salinity water flooding, immiscible displacement, Kashkari oil field, two-phase flow, numerical reservoir simulation model

Procedia PDF Downloads 39
419 Potential Impacts of Climate Change on Hydrological Droughts in the Limpopo River Basin

Authors: Nokwethaba Makhanya, Babatunde J. Abiodun, Piotr Wolski

Abstract:

Climate change may intensify hydrological droughts and reduce water availability in river basins. Despite this, most research on climate change effects in southern Africa has focused exclusively on meteorological droughts. This study projects the potential impact of climate change on the future characteristics of hydrological droughts in the Limpopo River Basin (LRB). The study uses regional climate model (RCM) simulations (from the Coordinated Regional Climate Downscaling Experiment, CORDEX) in combination with hydrological simulations (using the Soil and Water Assessment Tool Plus model, SWAT+) to project the impacts at four global warming levels (GWLs: 1.5℃, 2.0℃, 2.5℃, and 3.0℃) under the RCP8.5 future climate scenario. The SWAT+ model was calibrated and validated with a streamflow dataset observed over the basin, and the sensitivity of model parameters was investigated. The performance of the SWAT+ LRB model was verified using the Nash-Sutcliffe efficiency (NSE), Percent Bias (PBIAS), Root Mean Square Error (RMSE), and coefficient of determination (R²). The Standardized Precipitation Evapotranspiration Index (SPEI) and the Standardized Precipitation Index (SPI) have been used to detect meteorological droughts. The Soil Water Index (SSI) has been used to define agricultural drought, while the Water Yield Drought Index (WYLDI), the Surface Run-off Index (SRI), and the Streamflow Index (SFI) have been used to characterise hydrological drought. The performance of the SWAT+ model simulations over the LRB is sensitive to the parameters CN2 (initial SCS runoff curve number for moisture condition II) and ESCO (soil evaporation compensation factor). The best simulation generally performed better during the calibration period than the validation period. In the calibration and validation periods, NSE is ≤ 0.8, while PBIAS is ≥ −80.3%, RMSE ≥ 11.2 m³/s, and R² ≤ 0.9.
The simulations project a future increase in temperature and potential evapotranspiration over the basin, but they do not project a significant future trend in precipitation and hydrological variables. However, the spatial distribution of precipitation reveals a projected increase in precipitation in the southern part of the basin and a decline in the northern part, with the region of reduced precipitation projected to grow with increasing GWLs. A decrease in all hydrological variables is projected over most parts of the basin, especially the eastern part. The simulations predict that meteorological droughts (i.e., SPEI and SPI), agricultural droughts (i.e., SSI), and hydrological droughts (i.e., WYLDI and SRI) would become more intense and severe across the basin. SPEI-drought has a greater magnitude of increase than SPI-drought, while agricultural and hydrological droughts have magnitudes of increase between the two. As a result, this research suggests that future hydrological droughts over the LRB could be more severe than the SPI-drought projection predicts but less severe than the SPEI-drought projection. This research can be used to mitigate the effects of potential climate change on hydrological drought in the basin.
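The goodness-of-fit statistics used to verify the SWAT+ model (NSE, PBIAS, RMSE) can be computed directly from paired observed and simulated streamflow; the short sketch below uses invented values, and the sign convention shown for PBIAS (positive = underestimation) is one of several in use.

```python
import math

# Toy observed/simulated streamflow series (m^3/s); values are illustrative,
# not Limpopo data.
obs = [12.0, 18.5, 25.0, 30.2, 22.1, 15.4]
sim = [10.5, 17.0, 27.5, 28.0, 23.3, 14.0]

mean_obs = sum(obs) / len(obs)
ss_res = sum((o - s) ** 2 for o, s in zip(obs, sim))   # residual sum of squares
ss_tot = sum((o - mean_obs) ** 2 for o in obs)         # variance of observations

nse = 1.0 - ss_res / ss_tot                  # Nash-Sutcliffe efficiency
rmse = math.sqrt(ss_res / len(obs))          # root mean square error
# Percent bias; with this convention, positive means the model underestimates.
pbias = 100.0 * sum(o - s for o, s in zip(obs, sim)) / sum(obs)

print(f"NSE={nse:.3f}  RMSE={rmse:.2f} m^3/s  PBIAS={pbias:.1f}%")
```

NSE close to 1 and PBIAS close to 0 indicate a good fit, which is how the calibration and validation periods above would be judged.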

Keywords: climate change, CORDEX, drought, hydrological modelling, Limpopo River Basin

Procedia PDF Downloads 128
418 Continuous and Discontinuous Modeling of Wellbore Instability in Anisotropic Rocks

Authors: C. Deangeli, P. Obentaku Obenebot, O. Omwanghe

Abstract:

The study focuses on the analysis of wellbore instability in rock masses affected by weakness planes. In such rocks, failure can occur in the rock matrix and/or along the weakness planes, in relation to the mud weight gradient. In this case the simple Kirsch solution coupled with a failure criterion cannot supply a suitable scenario for borehole instabilities. Two different numerical approaches have been used in order to investigate the onset of local failure at the wall of a borehole. For each approach the influence of the inclination of the weakness planes has been investigated by considering joint sets at 0°, 35° and 90° to the horizontal. The first set of models has been carried out with FLAC 2D (Fast Lagrangian Analysis of Continua), considering the rock material as a continuous medium, with a Mohr-Coulomb criterion for the rock matrix and using the ubiquitous joint model to account for the presence of the weakness planes. In this model, yield may occur in the solid, along the weak plane, or in both, depending on the stress state, the orientation of the weak plane and the material properties of the solid and weak plane. The second set of models has been performed with PFC2D (Particle Flow Code). This code is based on the Discrete Element Method and considers the rock material as an assembly of grains bonded by cement-like materials, with pore spaces. The presence of weakness planes is simulated by the degradation of the bonds between grains along given directions. In general the results of the two approaches are in agreement. However, the discrete approach seems to capture more complex phenomena related to local failure, in the form of grain detachment at the wall of the borehole. In fact, the presence of weakness planes in the discontinuous medium leads to local instability along the weak planes even in conditions not predicted by the continuous solution.
In general, slip failure locations and directions do not follow the conventional wellbore breakout direction but depend upon the internal friction angle and the orientation of the bedding planes. When the weakness planes are at 0° or 90°, the behaviour is similar to that of a continuous rock material, but borehole instability is more severe when the weakness planes are inclined at an angle between 0° and 90° to the horizontal. In conclusion, the results of the numerical simulations show that the prediction of local failure at the wall of the wellbore cannot disregard the presence of weakness planes, and consequently the higher mud weight required for stability at any specific inclination of the joints. Although the discrete approach can simulate only smaller areas, because of the large number of particles required for the generation of the rock material, it seems to investigate more correctly the occurrence of failure at the microscale and, eventually, the propagation of the failed zone to a larger portion of rock around the wellbore.
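For reference, the "simple Kirsch solution" that the abstract argues is insufficient for rocks with weakness planes gives the hoop stress at the wall of a circular borehole in an isotropic elastic medium; the stress magnitudes below are assumed for illustration.

```python
import math

# Kirsch hoop stress at the wall of a circular borehole in an isotropic,
# linear-elastic medium under plane conditions. Stress values are assumed
# for illustration (MPa, compression positive).
SIG_H, SIG_h = 30.0, 20.0   # far-field maximum / minimum horizontal stresses
P_W = 10.0                  # wellbore (mud) pressure

def hoop_stress(theta_deg):
    """Hoop stress at the wall; theta measured from the SIG_H direction."""
    th = math.radians(theta_deg)
    return SIG_H + SIG_h - 2.0 * (SIG_H - SIG_h) * math.cos(2.0 * th) - P_W

for theta in (0, 45, 90):
    print(f"theta={theta:3d} deg  sigma_theta={hoop_stress(theta):.1f} MPa")
```

The hoop stress peaks at 90° from the maximum-stress direction, which is the conventional breakout position; the point of the abstract is that ubiquitous-joint and particle models predict failure locations that deviate from this when weakness planes are inclined.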

Keywords: continuous-discontinuous, numerical modelling, weakness planes, wellbore, FLAC 2D

Procedia PDF Downloads 499
417 Applying Biculturalism in Studying Tourism Host Community Cultural Integrity and Individual Member Stress

Authors: Shawn P. Daly

Abstract:

Communities heavily engaged in the tourism industry discover their values intersect, meld, and conflict with those of visitors. Maintaining cultural integrity in the face of powerful external pressures causes stress among society members. This effect represents a less studied aspect of sustainable tourism. The present paper brings a perspective unique to the tourism literature: biculturalism. The grounded theories, coherent hypotheses, and validated constructs and indicators of biculturalism represent a sound base from which to consider sociocultural issues in sustainable tourism. Five models describe the psychological state of individuals operating at cultural crossroads: assimilation (joining the new culture), acculturation (grasping the new culture but remaining of the original culture), alternation (varying behavior to cultural context), multicultural (maintaining distinct cultures), and fusion (blending cultures). These five processes divide into two units of analysis (individual and society), permitting research questions at levels important for considering sociocultural sustainability. Acculturation modelling has morphed into the dual processes of acculturation (new culture adaptation) and enculturation (original culture adaptation). This dichotomy divides sustainability research questions into human impacts from assimilation (acquiring the new culture, discarding the original), separation (rejecting the new culture, keeping the original), integration (acquiring the new culture, keeping the original), and marginalization (rejecting the new culture, discarding the original). Biculturalism is often cast in terms of its emotional, behavioral, and cognitive dimensions. Required cultural adjustments and varying levels of cultural competence lead to physical, psychological, and emotional outcomes, including depression, lowered life satisfaction and self-esteem, headaches, and back pain, or enhanced career success, social skills, and lifestyles.
Numerous studies provide empirical scales and research hypotheses for sustainability research into tourism’s causality and effect on local well-being. One key issue in applying biculturalism to sustainability scholarship concerns identification and specification of the alternative new culture contacting local culture. Evidence exists for tourism industry, universal tourist, and location/event-specific tourist culture. The biculturalism paradigm holds promise for researchers examining evolving cultural identity and integrity in response to mass tourism. In particular, confirmed constructs and scales simplify operationalization of tourism sustainability studies in terms of human impact and adjustment.

Keywords: biculturalism, cultural integrity, psychological and sociocultural adjustment, tourist culture

Procedia PDF Downloads 409
416 Three Issues for Integrating Artificial Intelligence into Legal Reasoning

Authors: Fausto Morais

Abstract:

Artificial intelligence has been widely used in law. Programs are able to classify suits, to identify decision-making patterns, to predict outcomes, and to formalize legal arguments as well. In Brazil, the artificial intelligence system Victor has been classifying cases according to the Supreme Court’s standards. When those programs perform those tasks, they simulate a kind of legal decision and legal argument, raising doubts about how artificial intelligence can be integrated into legal reasoning. Taking this into account, the following three issues are identified: the problem of hypernormatization, the argument of legal anthropocentrism, and artificial legal principles. Hypernormatization can be seen in the Brazilian legal context in the Supreme Court’s usage of the Victor program. This program has generated efficiency and consistency. On the other hand, there is a real risk of over-standardizing factual and normative legal features. Legal clerks and programmers should therefore work together to develop an adequate way to model legal language in computational code. If this is possible, intelligent programs may enact legal decisions automatically in easy cases, and, in this picture, the legal anthropocentrism argument takes place. This argument holds that only human beings should enact legal decisions, because human beings have a conscience, free will, and self-unity. In spite of that, it is possible to argue against the anthropocentrism argument and to show how intelligent programs may overcome human shortcomings such as misleading cognition, emotions, and lack of memory. In this way, intelligent machines could be able to pass legal decisions automatically by classification, as Victor does in Brazil, because they are bound by legal patterns and should not deviate from them. Notwithstanding, artificial intelligence programs can be helpful beyond easy cases.
In hard cases, they are able to identify legal standards and legal arguments by using machine learning. For that, a dataset of legal decisions regarding a particular matter must be available, which is a reality in the Brazilian judiciary. Through such a procedure, artificial intelligence programs can support a human decision in hard cases, providing legal standards and arguments based on empirical evidence. Those legal features carry argumentative weight in legal reasoning and should serve as references for judges when they must decide whether to maintain or overcome a legal standard.

Keywords: artificial intelligence, artificial legal principles, hypernormatization, legal anthropocentrism argument, legal reasoning

Procedia PDF Downloads 145
415 Effect of Primer on Bonding between Resin Cement and Zirconia Ceramic

Authors: Deog-Gyu Seo, Jin-Soo Ahn

Abstract:

Objectives: Recently, the development of adhesive primers for stable bonding between zirconia and resin cement has been on the increase. The bond strength of zirconia-resin cement can be effectively increased by treatment with a primer composed of an adhesive monomer that can chemically bond with the oxide layer which forms on the surface of zirconia. 10-methacryloyloxydecyl dihydrogen phosphate (10-MDP), which contains phosphate ester, and the acidic monomer 4-methacryloxyethyl trimellitic anhydride (4-META) have been suggested as monomers that can form a chemical bond with the surface oxide layer of zirconia. These suggested monomers have also proved to be effective zirconia surface treatments for bonding to resin cement. The purpose of this study is to evaluate the effects of primer treatment on the bond strength of zirconia-resin cement by using three different primers on the market. Methods: Zirconia blocks were prepared into 60 disk-shaped specimens by using a diamond saw. Specimens were divided into four groups: the first three groups were treated with Zirconia Liner (Sun Medical Co., Ltd., Furutaka-cho, Moriyama, Shiga, Japan), Alloy Primer (Kuraray Noritake Dental Inc., Sakaju, Kurashiki, Okayama, Japan), and Universal Primer (Tokuyama Dental Corp., Taitou, Taitou-ku, Tokyo, Japan), respectively. The last group was the control, with no surface treatment. Dual-cured resin cement (Biscem, Bisco Inc., Schaumburg, IL, USA) was luted to each group of specimens. Shear bond strengths were then measured by a universal testing machine. The significance of the results was statistically analyzed by one-way ANOVA and the Tukey test. The failure sites in each group were inspected under a magnifier. Results: Mean shear bond strengths were 0.60, 1.39, 1.03, and 1.38 MPa for the control, Zirconia Liner (ZL), Alloy Primer (AP), and Universal Primer (UP), respectively.
The groups treated with each of the three primers showed significantly higher shear bond strength than the control group (p < 0.05). Among the three treated groups, ZL and UP showed significantly higher shear bond strength than AP (p < 0.05), and there was no significant difference in mean shear bond strength between ZL and UP (p > 0.05). While most specimens of the control group showed adhesive failure (80%), most specimens of the three primer-treated groups showed cohesive or mixed failure (80%).
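For context, the shear bond strength reported by a universal testing machine is simply the failure load divided by the bonded area; the load and bond diameter in the sketch below are illustrative, not values from this study.

```python
import math

# Shear bond strength tau = F / A for a circular bonded area.
# Both input values are illustrative assumptions, not study measurements.
failure_load_n = 43.6      # load at failure (N)
bond_diameter_mm = 6.0     # diameter of the bonded area (mm)

area_mm2 = math.pi * bond_diameter_mm ** 2 / 4.0
tau_mpa = failure_load_n / area_mm2   # N/mm^2 is numerically equal to MPa

print(f"shear bond strength = {tau_mpa:.2f} MPa")
```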

Keywords: primer, resin cement, shear bond strength, zirconia

Procedia PDF Downloads 202
414 Kinematic Analysis of Human Gait for Typical Postures of Walking, Running and Cart Pulling

Authors: Nupur Karmaker, Hasin Aupama Azhari, Abdul Al Mortuza, Abhijit Chanda, Golam Abu Zakaria

Abstract:

Purpose: The purpose of gait analysis is to determine the biomechanics of the joints, the phases of the gait cycle, graphical and analytical analysis of the degrees of rotation, analysis of the electrical activity of muscles, and the force exerted on the hip joint during different locomotion: walking, running and cart pulling. Methods and Materials: Visual gait analysis and electromyography have been used to detect the degree of rotation of joints and the electrical activity of muscles. In the cinematography method, an object is observed from different sides and its motion is recorded on video. The cart-pulling sequence has been divided into frames with respect to time by using video splitter software. Phases of the gait cycle, degrees of rotation of joints, EMG profiles and force analysis during walking and running have been taken from different papers. The gait cycle and degrees of rotation of joints during cart pulling have been prepared by using a video camera, stopwatch, video splitter software and Microsoft Excel. Results and Discussion: During cart pulling, the force exerted on the hip is the resultant of various forces: the vector sum of the force Fg = mg, due to the body weight of the person, and Fa = ma, due to the acceleration. The stance phase is longest during cart pulling and shortest during running. The hip joint shows its maximum degree of rotation during cart pulling, the knee during running, and the ankle during cart pulling; the hip shows its minimum degree of rotation during walking, and the ankle during running. During cart pulling, the dynamic force depends on the walking velocity, body weight and load weight. Conclusions: 80% of people suffer from gait-related diseases as they age. Proper care should be taken during cart pulling. It would be worthwhile to establish gait laboratories to determine gait-related diseases.
If the way of cart pulling is changed, i.e., the design of the cart-pulling machine and its load-bearing system, then it would be possible to reduce the risk of limb loss, flat-foot syndrome and varicose veins in the lower limb.
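The resultant hip force described above, the vector sum of Fg = mg and Fa = ma, can be sketched as follows, assuming (for illustration only) that the weight term acts vertically, the inertial term acts horizontally, and the mass and acceleration take the values shown.

```python
import math

# Resultant force on the hip during cart pulling as the vector sum of the
# weight term Fg = m*g and the inertial term Fa = m*a. The mass and
# acceleration values, and the perpendicular orientation of the two
# components, are illustrative assumptions.
m = 70.0    # body mass (kg)
g = 9.81    # gravitational acceleration (m/s^2)
a = 1.2     # forward acceleration while pulling (m/s^2, assumed)

fg = m * g                        # vertical component (N)
fa = m * a                        # horizontal component (N)
resultant = math.hypot(fg, fa)    # magnitude of the vector sum (N)

print(f"Fg={fg:.0f} N  Fa={fa:.0f} N  |F|={resultant:.0f} N")
```

With non-perpendicular components the magnitude would instead follow the full law of cosines, but the perpendicular case shows the idea.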

Keywords: kinematic, gait, gait lab, phase, force analysis

Procedia PDF Downloads 576
413 Exploring Pre-Trained Automatic Speech Recognition Model HuBERT for Early Alzheimer’s Disease and Mild Cognitive Impairment Detection in Speech

Authors: Monica Gonzalez Machorro

Abstract:

Dementia is hard to diagnose because of the lack of early physical symptoms. Early dementia recognition is key to improving the living conditions of patients. Speech technology is considered a valuable biomarker for this challenge. Recent works have utilized conventional acoustic features and machine learning methods to detect dementia in speech. BERT-like classifiers have reported the most promising performance. One constraint, nonetheless, is that these studies are either based on human transcripts or on transcripts produced by automatic speech recognition (ASR) systems. The contribution of this research is to explore a method that does not require transcriptions to detect early Alzheimer’s disease (AD) and mild cognitive impairment (MCI). This is achieved by fine-tuning a pre-trained ASR model for the downstream early AD and MCI tasks. To do so, a subset of the thoroughly studied Pitt Corpus is customized. The subset is balanced for class, age, and gender. Data processing also involves cropping the samples into 10-second segments. For comparison purposes, a baseline model is defined by training and testing a Random Forest with 20 acoustic features extracted using the librosa library in Python. These are: zero-crossing rate, MFCCs, spectral bandwidth, spectral centroid, root mean square, and short-time Fourier transform. The baseline model achieved a 58% accuracy. To fine-tune HuBERT as a classifier, an average pooling strategy is employed to merge the 3D representations from audio into 2D representations, and a linear layer is added. The pre-trained model used is ‘hubert-large-ls960-ft’. Empirically, the number of epochs selected is 5, and the batch size is 1. Experiments show that the proposed method reaches a 69% balanced accuracy. This suggests that the linguistic and speech information encoded in the self-supervised ASR-based model is able to learn acoustic cues of AD and MCI.
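Two of the baseline acoustic features listed above, zero-crossing rate and root mean square, can be computed from first principles as sketched below on a synthetic 440 Hz tone; the study's baseline used librosa's implementations, so this is only an illustration of what those features measure.

```python
import math

# Synthetic 1-second, 440 Hz sine tone at a 16 kHz sample rate; a stand-in
# for a real speech segment, used only to illustrate the two features.
SR = 16000
signal = [math.sin(2 * math.pi * 440 * n / SR) for n in range(SR)]

# Zero-crossing rate: fraction of sample-to-sample sign changes.
zero_crossings = sum(1 for a, b in zip(signal, signal[1:]) if (a >= 0) != (b >= 0))
zcr = zero_crossings / len(signal)

# Root mean square: overall energy of the segment.
rms = math.sqrt(sum(x * x for x in signal) / len(signal))

print(f"ZCR={zcr:.4f}  RMS={rms:.3f}")
```

A 440 Hz tone crosses zero about 880 times per second, so the ZCR comes out near 880/16000; the RMS of a unit sine is near 1/√2. In the actual pipeline these per-frame features feed the Random Forest baseline, while HuBERT consumes the raw waveform directly.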

Keywords: automatic speech recognition, early Alzheimer’s recognition, mild cognitive impairment, speech impairment

Procedia PDF Downloads 127
412 Strategies to Mitigate Disasters at the Hajj Religious Festival Using GIS and Agent Based Modelling

Authors: Muteb Alotaibi, Graham Clarke, Nick Malleson

Abstract:

The Hajj religious festival at Mina in Saudi Arabia has always presented the opportunity for injuries or deaths. For example, in 1990, a stampede killed 1426 pilgrims, whilst in 1997, 343 people were killed and 1500 injured due to a fire fuelled by high winds sweeping through the tent city in Mina. Many more minor incidents have occurred since then. It is predicted that 5 million pilgrims will soon perform the ritual at Mina (which is, in effect, a temporary city built each year in the desert), which might lead in the future to severe congestion and accidents unless research is conducted on actions that contribute positively to improving the management of the crowd and facilitating the safe and secure flow of pilgrims. To help prevent further disasters, it is important to first plan better, more accessible locations for emergency services across Mina to ensure a good service for pilgrims. In this paper, we first use a Location Allocation Model (LAM) within a network GIS to examine the optimal locations for key services in the temporary city of Mina. This has been undertaken in relation to the location and movement of the pilgrims during the six-day religious festival. The results of various what-if scenarios have been compared against the current location of services. A major argument is that planners should be flexible and locate facilities at different locations throughout the day and night. The use of location-allocation models in this type of comparative static mode has rarely been operationalised in the literature. Second, we model pilgrim movements and behaviours along the most crowded parts of the network. This has been done using an agent-based model. This model allows planners to understand the key bottlenecks in the network and at what usage levels the paths become critically congested. Thus the paper has important implications and recommendations for future disaster planning strategies.
This will enable planners to see at what stage in the movements of pilgrims problems occur in terms of potential crushes and trampling incidents. The main application of this research is customised for pedestrians only, as the focus is on pilgrims who move to the Jamarat on foot. Further, the network in the middle of Mina is dedicated to pedestrians for safety, so no buses, trains or private cars are allowed in this area, to prevent congestion within the network. This research focuses on Mina as a ‘temporary city’, and on service provision in temporary cities, which has not been highlighted in the literature so far. Further, it is the first study to use dynamic demand to optimise services for both day-time and night-time conditions. Moreover, it is the first study to link a location-allocation model for optimising services with an ABM, to find out whether services are located in places that do not disturb the mainstream crowd flow while remaining reachable by pilgrims who need, for example, health services.
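The location-allocation step can be illustrated with a toy p-median formulation: choose p facility sites among candidates so that the total demand-weighted distance is minimised. The demand weights, distance matrix, and value of p below are invented, not Mina data.

```python
import itertools

# demand_weights[i]: pilgrims at demand node i.
# dist[i][j]: network distance from demand node i to candidate site j
# (arbitrary units). All values are illustrative.
demand_weights = [500, 300, 800, 200]
dist = [
    [1.0, 4.0, 3.0],
    [2.5, 1.0, 2.0],
    [4.0, 2.0, 1.5],
    [3.0, 3.5, 1.0],
]
P = 2  # number of facilities to open

def total_cost(open_sites):
    """Demand-weighted distance, each node served by its nearest open site."""
    return sum(w * min(row[j] for j in open_sites)
               for w, row in zip(demand_weights, dist))

best = min(itertools.combinations(range(len(dist[0])), P), key=total_cost)
print("open sites:", best, "cost:", total_cost(best))
```

Repeating this with separate day-time and night-time demand vectors captures the paper's argument that optimal facility locations shift over the course of the festival day; a full network-GIS LAM replaces the exhaustive search with specialised heuristics.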

Keywords: ABM, crowd management, hajj, temporary city

Procedia PDF Downloads 122
411 Advancing Aviation: A Multidisciplinary Approach to Innovation, Management, and Technology Integration in the 21st Century

Authors: Fatih Frank Alparslan

Abstract:

The aviation industry is at a crucial turning point due to modern technologies, environmental concerns, and changing ways of transporting people and goods globally. The paper examines these challenges and opportunities comprehensively. It emphasizes the role of innovative management and advanced technology in shaping the future of air travel. This study begins with an overview of the current state of the aviation industry, identifying key areas where innovation and technology could be highly beneficial. It explores the latest advancements in airplane design, propulsion, and materials. These technological advancements are shown to enhance aircraft performance and environmental sustainability. The paper also discusses the use of artificial intelligence and machine learning in improving air traffic control, enhancing safety, and making flight operations more efficient. The management of these technologies is critically important. Therefore, the research delves into necessary changes in organization, culture, and operations to support innovation. It proposes a management approach that aligns with these modern technologies, underlining the importance of forward-thinking leaders who collaborate across disciplines and embrace innovative ideas. The paper addresses challenges in adopting these innovations, such as regulatory barriers, the need for industry-wide standards, and the impact of technological changes on jobs and society. It recommends that governments, aviation businesses, and educational institutions collaborate to address these challenges effectively, paving the way for a more innovative and eco-friendly aviation industry. In conclusion, the paper argues that the future of aviation relies on integrating new management practices with innovative technologies. It urges a collective effort to push beyond current capabilities, envisioning an aviation industry that is safer, more efficient, and environmentally responsible. 
By adopting a broad approach, this research contributes to the ongoing discussion about resolving the complex issues facing today's aviation sector, offering insights and guidance to prepare for future advancements.

Keywords: aviation innovation, technology integration, environmental sustainability, management strategies, multidisciplinary approach

Procedia PDF Downloads 48
410 Investigation of the Mechanical and Thermal Properties of a Silver Oxalate Nanoporous Structured Sintered Joint for Micro-joining in Relation to the Sintering Process Parameters

Authors: L. Vivet, L. Benabou, O. Simon

Abstract:

With highly demanding applications in the field of power electronics, there is an increasing need for interconnection materials that can ensure both good mechanical assembly and high thermal/electrical conductivity. So far, lead-free solders have been considered an attractive solution, but recently, sintered joints based on nano-silver paste have been used for die attach and have proved to be a promising solution offering increased performance in high-temperature applications. In this work, the main parameters of the bonding process using silver oxalates, chiefly the heating rate and the bonding pressure, are studied. Their effects on both the mechanical and thermal properties of the sintered layer are evaluated following an experimental design. Pairs of copper substrates with gold metallization are assembled through the sintering process to realize the samples, which are tested using a micro-traction machine. In addition, the obtained joints are examined through microscopy to identify the important microstructural features in relation to the measured properties. The formation of an intermetallic compound at the junction between the sintered silver layer and the gold metallization deposited on copper is also analyzed. Microscopy analysis reveals a nanoporous structure of the sintered material. It is found that higher temperature and bonding pressure result in higher densification of the sintered material, giving higher thermal conductivity of the joint but less mechanical flexibility to accommodate the thermo-mechanical stresses arising during service. The experimental design hence allows the optimal process parameters to be determined to reach sufficient thermal/mechanical properties for a given application. 
It is also found that the interphase formed between silver and gold metallization is the location where the fracture occurred after the mechanical testing, suggesting that the inter-diffusion mechanism between the different elements of the assembly leads to the formation of a relatively brittle compound.

Keywords: nanoporous structure, silver oxalate, sintering, mechanical strength, thermal conductivity, microelectronic packaging

Procedia PDF Downloads 93
409 Embedded Visual Perception for Autonomous Agricultural Machines Using Lightweight Convolutional Neural Networks

Authors: René A. Sørensen, Søren Skovsen, Peter Christiansen, Henrik Karstoft

Abstract:

Autonomous agricultural machines act in stochastic surroundings and must therefore be able to perceive the surroundings in real time. This perception can be achieved using image sensors combined with advanced machine learning, in particular deep learning. Deep convolutional neural networks excel in labeling and perceiving color images, and since the cost of high-quality RGB cameras is low, the hardware cost of good perception depends heavily on memory and computation power. This paper investigates the possibility of designing lightweight convolutional neural networks for semantic segmentation (pixel-wise classification) with reduced hardware requirements, to allow for embedded usage in autonomous agricultural machines. Using compression techniques, a lightweight convolutional neural network is designed to perform real-time semantic segmentation on an embedded platform. The network is trained on two large datasets, ImageNet and Pascal Context, to recognize up to 400 individual classes. The 400 classes are remapped into agricultural superclasses (e.g., human, animal, sky, road, field, shelterbelt and obstacle), and the ability to provide accurate real-time perception of agricultural surroundings is studied. The network is applied to the case of autonomous grass mowing using the NVIDIA Tegra X1 embedded platform. Feeding case-specific images to the network results in a fully segmented map of the superclasses in the image. As the network is still being designed and optimized, only a qualitative analysis of the method is complete at the abstract submission deadline. Following this deadline, the finalized design is quantitatively evaluated on 20 annotated grass mowing images. Lightweight convolutional neural networks for semantic segmentation can be implemented on an embedded platform and show competitive performance with regard to accuracy and speed. 
It is feasible to provide cost-efficient perceptive capabilities related to semantic segmentation for autonomous agricultural machines.
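The remapping step described above (collapsing hundreds of dataset labels into a handful of agricultural superclasses) can be sketched as a simple lookup; the class names and groupings below are illustrative examples, not the authors' actual mapping table:

```python
# Illustrative remapping of fine-grained labels to agricultural superclasses;
# these class names and groupings are examples, not the authors' actual table.
SUPERCLASS_MAP = {
    "person": "human", "rider": "human",
    "cow": "animal", "dog": "animal", "bird": "animal",
    "sky": "sky", "cloud": "sky",
    "road": "road", "path": "road",
    "grass": "field", "crop": "field",
    "tree": "shelterbelt", "hedge": "shelterbelt",
}

def remap_segmentation(label_grid, mapping, default="obstacle"):
    """Collapse a per-pixel grid of fine-grained labels into superclasses.
    Unknown labels fall back to 'obstacle', a conservative choice for safety."""
    return [[mapping.get(lbl, default) for lbl in row] for row in label_grid]

grid = [["grass", "cow", "unknown_class"],
        ["road", "person", "sky"]]
print(remap_segmentation(grid, SUPERCLASS_MAP))
```

Mapping every unrecognized label to "obstacle" is one defensible default for a safety-critical mower, since a false obstacle only costs efficiency, not safety.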

Keywords: autonomous agricultural machines, deep learning, safety, visual perception

Procedia PDF Downloads 394
408 Regional Rates of Sand Supply to the New South Wales Coast: Southeastern Australia

Authors: Marta Ribo, Ian D. Goodwin, Thomas Mortlock, Phil O’Brien

Abstract:

Coastal behavior is best investigated using a sediment budget approach, based on the identification of sediment sources and sinks. Grain size distribution over the New South Wales (NSW) continental shelf has been widely characterized since the 1970s. Coarser sediment has generally accumulated on the outer shelf and/or nearshore zones, with the latter related to the presence of nearshore reefs and bedrock. The central part of the NSW shelf is characterized by the presence of fine sediments distributed parallel to the coastline. This study presents new grain size distribution maps along the NSW continental shelf, built using all available NSW and Commonwealth Government holdings. All available seabed bathymetric data from prior projects, single and multibeam sonar, and aerial LiDAR surveys were integrated into a single bathymetric surface for the NSW continental shelf. Grain size information was extracted from the sediment sample data collected in more than 30 studies. The information extracted from the sediment collections varied between reports. Thus, given the inconsistency of the grain size data, a common grain size classification was here defined using the phi scale. The new sediment distribution maps produced, together with new detailed seabed bathymetric data, enabled us to revise the delineation of sediment compartments to more accurately reflect the true nature of sediment movement on the inner shelf and nearshore. Accordingly, nine primary mega coastal compartments were delineated along the NSW coast and shelf. The sediment compartments are bounded by prominent nearshore headlands and reefs, and major river and estuarine inlets that act as sediment sources and/or sinks. The new sediment grain size distribution was used as an input in the morphological modelling to quantify the sediment transport patterns (and indicative rates of transport), used to investigate sand supply rates and processes from the lower shoreface to the NSW coast. 
The rate of sand supply to the NSW coast from deep water is a major uncertainty in projecting future coastal response to sea-level rise. Offshore transport of sand is generally expected as beaches respond to rising sea levels but an onshore supply from the lower shoreface has the potential to offset some of the impacts of sea-level rise, such as coastline recession. Sediment exchange between the lower shoreface and sub-aerial beach has been modelled across the south, central, mid-north and far-north coast of NSW. Our model approach is that high-energy storm events are the primary agents of sand transport in deep water, while non-storm conditions are responsible for re-distributing sand within the beach and surf zone.
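The phi-scale classification used to harmonize the grain size data is a standard transform (Krumbein phi = -log2 of the grain diameter in millimetres). A minimal sketch, with standard boundaries but an abbreviated label set, is:

```python
import math

def phi_scale(diameter_mm):
    """Krumbein phi scale: phi = -log2(d / d0) with d0 = 1 mm."""
    return -math.log2(diameter_mm)

def classify(phi):
    """Abbreviated Wentworth-style bins (standard boundaries, reduced labels)."""
    if phi < -1:
        return "gravel"
    if phi < 4:
        return "sand"
    if phi < 8:
        return "silt"
    return "clay"

for d in (2.0, 0.25, 0.004):
    p = phi_scale(d)
    print(f"{d} mm -> phi {p:.2f} ({classify(p)})")
```

Because phi is logarithmic, samples reported in millimetres, micrometres, or phi units from different surveys can all be mapped onto one common scale.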

Keywords: New South Wales coast, off-shore transport, sand supply, sediment distribution maps

Procedia PDF Downloads 227
407 Design of an Ultra High Frequency Rectifier for Wireless Power Systems by Using Finite-Difference Time-Domain

Authors: Felipe M. de Freitas, Ícaro V. Soares, Lucas L. L. Fortes, Sandro T. M. Gonçalves, Úrsula D. C. Resende

Abstract:

There is dispersed energy in radio frequencies (RF) that can be reused to power electronic circuits such as sensors, actuators, identification devices, and other systems, without wired connections or a battery supply requirement. In this context, there are different types of energy harvesting systems, including rectennas, coil systems, graphene and new materials. A secondary step of an energy harvesting system is the rectification of the collected signal, which may be carried out, for example, by the combination of one or more Schottky diodes connected in series or shunt. In the case of a rectenna-based system, for instance, the diode used must be able to receive low power signals at ultra-high frequencies. Therefore, low values of series resistance, junction capacitance and potential barrier voltage are required. Due to this low-power condition, voltage multiplier configurations are used, such as voltage doublers or modified bridge converters. A low-pass filter (LPF) at the input, a DC output filter, and a resistive load are also commonly used in the rectifier design. Electronic circuit designs are commonly analyzed through simulation in the SPICE (Simulation Program with Integrated Circuit Emphasis) environment. Despite the remarkable potential of SPICE-based simulators for complex circuit modeling and analysis of quasi-static electromagnetic field interaction, i.e., at low frequency, these simulators are limited: they cannot properly model microwave hybrid circuits in which there are both lumped and distributed elements. This work proposes, therefore, the electromagnetic modelling of electronic components in order to create models that satisfy the needs of simulations of circuits at ultra-high frequencies, with application in rectifiers coupled to antennas, as in energy harvesting systems, that is, in rectennas. 
For this purpose, the numerical method FDTD (Finite-Difference Time-Domain) is applied, and SPICE computational tools are used for comparison. In the present work, the Ampere-Maxwell equation is first applied to the equations of current density and electric field within the FDTD method, and its circuital relation with the voltage drop in the modeled component is established for the lumped-parameter case using the LE-FDTD (Lumped-Element Finite-Difference Time-Domain) approach proposed in for the passive components and the one proposed in for the diode. Next, a rectifier is built with the essential requirements for operating rectenna energy harvesting systems, and the FDTD results are compared with experimental measurements.
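For readers unfamiliar with the underlying scheme, a bare 1D FDTD field update (without the lumped-element extension the paper relies on) looks roughly like this; grid size, source position, and the update coefficient are illustrative:

```python
import math

def fdtd_1d(steps=120, size=200, src=50):
    """Bare 1D FDTD (Yee) update in normalized units, following the textbook
    scheme; the factor 0.5 is the Courant-stability factor folded into both
    updates. This sketch omits the LE-FDTD lumped-element extension."""
    ez = [0.0] * size  # electric field samples
    hy = [0.0] * size  # magnetic field samples (staggered half a cell)
    for t in range(steps):
        for k in range(1, size):
            ez[k] += 0.5 * (hy[k - 1] - hy[k])       # E update from curl of H
        ez[src] += math.exp(-((t - 30) ** 2) / 100.0)  # soft Gaussian source
        for k in range(size - 1):
            hy[k] += 0.5 * (ez[k] - ez[k + 1])       # H update from curl of E
    return ez
```

The LE-FDTD variant additionally inserts the component's current-voltage relation into the E-field update at the cells occupied by the lumped element, which is what lets a diode or RC load coexist with the full-wave grid.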

Keywords: energy harvesting system, LE-FDTD, rectenna, rectifier, wireless power systems

Procedia PDF Downloads 130
406 The Psychometric Properties of an Instrument to Estimate Performance in Ball Tasks Objectively

Authors: Kougioumtzis Konstantin, Rylander Pär, Karlsteen Magnus

Abstract:

Ball skills, as a subset of fundamental motor skills, are predictors of performance in sports. Currently, most tools evaluate ball skills utilizing subjective ratings. The aim of this study was to examine the psychometric properties of a newly developed instrument to objectively measure ball handling skills (BHS-test) utilizing digital instruments. Participants were a convenience sample of 213 adolescents (age M = 17.1 years, SD = 3.6; 55% females, 45% males) recruited from upper secondary schools and invited to a sports hall for the assessment. The 8-item instrument incorporated both accuracy-based ball skill tests and repetitive-performance tests with a ball. Testers counted performance manually in four of the tests (one throwing and three juggling tasks). Furthermore, assessment was technologically enhanced in the other four tests, utilizing a ball machine, a Kinect camera and balls with motion sensors (one balancing and three rolling tasks). 3D printing technology was used to construct equipment, while all results were administered digitally with smartphones/tablets, computers and a specially constructed application to send data to a server. The instrument was deemed reliable (α = .77), and principal component analysis was used on a random subset (53 of the participants). Furthermore, latent variable modeling was employed to confirm the structure with the remaining subset (160 of the participants). The analysis showed good factorial-related validity, with one factor explaining 57.90% of the total variance. Four loadings were larger than .80, two more exceeded .76, and the other two were .65 and .49. The one-factor solution was confirmed by a first-order model with one general factor and an excellent fit between model and data (χ² = 16.12, DF = 20; RMSEA = .00, CI90 .00–.05; CFI = 1.00; SRMR = .02). The loadings on the general factor ranged between .65 and .83. Our findings indicate good reliability and construct validity for the BHS-test. 
To develop the instrument further, more studies are needed with various age groups, e.g., children. We suggest using the BHS-test for diagnostic or assessment purposes in talent development and sports participation interventions that focus on ball games.
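The reported reliability (α = .77) is Cronbach's alpha. A minimal sketch of how it is computed from item scores (the scores below are hypothetical, not the study's data):

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha for internal consistency.
    items: k lists, each holding one item's scores across the same respondents."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent totals
    item_var = sum(pvariance(it) for it in items)     # sum of item variances
    return k / (k - 1) * (1 - item_var / pvariance(totals))

# Hypothetical scores: 3 items, 4 respondents (not the study's data).
items = [[4, 3, 3, 1], [4, 4, 3, 2], [5, 4, 3, 1]]
print(round(cronbach_alpha(items), 2))
```

Alpha rises toward 1 as the items co-vary strongly relative to their individual variances, which is why it is read as a measure of the test items tapping one underlying construct.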

Keywords: ball-handling skills, ball-handling ability, technologically-enhanced measurements, assessment

Procedia PDF Downloads 94
405 A Method for Clinical Concept Extraction from Medical Text

Authors: Moshe Wasserblat, Jonathan Mamou, Oren Pereg

Abstract:

Natural Language Processing (NLP) has made a major leap in the last few years in practical integration into medical solutions; for example, extracting clinical concepts from medical texts such as medical condition, medication, treatment, and symptoms. However, training and deploying those models in real environments still demands a large amount of annotated data and NLP/Machine Learning (ML) expertise, which makes this process costly and time-consuming. We present a practical and efficient method for clinical concept extraction that requires neither costly labeled data nor ML expertise. The method includes three steps: Step 1- the user injects a large in-domain text corpus (e.g., PubMed). Then, the system builds a contextual model containing vector representations of concepts in the corpus, in an unsupervised manner (e.g., Phrase2Vec). Step 2- the user provides a seed set of terms representing a specific medical concept (e.g., for the concept of symptoms, the user may provide: ‘dry mouth,’ ‘itchy skin,’ and ‘blurred vision’). Then, the system matches the seed set against the contextual model and extracts the most semantically similar terms (e.g., additional symptoms). The result is a complete set of terms related to the medical concept. Step 3- in production, there is a need to extract medical concepts from unseen medical text. The system extracts key phrases from the new text, then matches them against the complete set of terms from step 2, and the most semantically similar are annotated with the same medical concept category. As an example, the seed symptom concepts would result in the following annotation: “The patient complains of fatigue [symptom], dry skin [symptom], and weight loss [symptom], which can be an early sign for diabetes.” Our evaluations show promising results for extracting concepts from medical corpora. 
The method allows medical analysts to easily and efficiently build taxonomies (in step 2) representing their domain-specific concepts, and automatically annotate a large number of texts (in step 3) for classification/summarization of medical reports.
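Step 2 above, expanding a seed set against a contextual embedding model, can be sketched with toy vectors; the terms and 2-D embeddings below are illustrative stand-ins for a Phrase2Vec model trained on PubMed:

```python
import math

def cosine(u, v):
    """Cosine similarity between two dense vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def expand_concept(seed_terms, embeddings, top_n=2):
    """Rank non-seed terms by cosine similarity to the seed-set centroid."""
    dims = len(next(iter(embeddings.values())))
    centroid = [sum(embeddings[t][d] for t in seed_terms) / len(seed_terms)
                for d in range(dims)]
    candidates = [t for t in embeddings if t not in seed_terms]
    ranked = sorted(candidates,
                    key=lambda t: cosine(embeddings[t], centroid),
                    reverse=True)
    return ranked[:top_n]

# Toy 2-D vectors standing in for a Phrase2Vec model trained on PubMed.
vecs = {
    "dry mouth": [0.9, 0.1], "itchy skin": [0.8, 0.2],
    "blurred vision": [0.85, 0.15],
    "fatigue": [0.8, 0.3],       # another symptom: close to the seeds
    "metformin": [0.1, 0.9],     # a medication: far from the seeds
}
seeds = ["dry mouth", "itchy skin", "blurred vision"]
print(expand_concept(seeds, vecs, top_n=1))  # ['fatigue']
```

In a real system, the centroid-plus-cosine ranking would run over vectors for millions of corpus phrases, which is what lets a handful of seeds grow into a full symptom vocabulary.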

Keywords: clinical concepts, concept expansion, medical records annotation, medical records summarization

Procedia PDF Downloads 135
404 Methodical Approach for the Integration of a Digital Factory Twin into the Industry 4.0 Processes

Authors: R. Hellmuth

Abstract:

Current research on flexibility and adaptability in factory planning is oriented toward the machine and process level; factory buildings are not its focus. Factory planning has the task of designing products, plants, processes, organization, areas and the construction of a factory. The adaptability of a factory can be divided into three types: spatial, organizational and technical adaptability. Spatial adaptability indicates the ability to expand and reduce the size of a factory. Here, the area-related breathing capacity plays the essential role. It mainly concerns the factory site, the plant layout and the production layout. The organizational ability to change enables the change and adaptation of organizational structures and processes. This includes structural and process organization as well as logistical processes and principles. New and reconfigurable operating resources, processes and factory buildings are referred to as technical adaptability. These three types of adaptability can be regarded independently of each other as undirected potentials of different characteristics. If there is a need for change, the types of changeability in the change process are combined to form a directed, complementary variable that makes change possible. When planning adaptability, importance must be attached to a balance between the types of adaptability. The vision of the intelligent factory building and the 'Internet of Things' presupposes the comprehensive digitalization of the spatial and technical environment. Through connectivity, the factory building must be empowered to support a company's value creation process by providing media such as light, electricity, heat, refrigeration, etc. In the future, communication with the surrounding factory building will take place on a digital or automated basis. 
In the area of Industry 4.0, the function of the building envelope belongs to secondary or even tertiary processes, but these processes must also be included in the communication cycle. An integrative view of continuous communication between primary, secondary and tertiary processes is not yet available and is being developed with the aid of methods in this research work. A comparison of the digital twin from the points of view of production and of the factory building will be developed. Subsequently, a tool will be elaborated to classify digital twins from the perspective of data, degree of visualization, and the trades. Thus, a contribution is made to better integrate the secondary and tertiary processes of a factory into its value creation.

Keywords: adaptability, digital factory twin, factory planning, industry 4.0

Procedia PDF Downloads 156
403 Crime Prevention with Artificial Intelligence

Authors: Mehrnoosh Abouzari, Shahrokh Sahraei

Abstract:

Today, with the increase in the quantity, quality and variety of crimes, the discussion of crime prevention faces a serious challenge: human resources alone, using traditional methods, will not be effective. One of the developments in the modern world is the presence of artificial intelligence in various fields, including criminal law. In fact, the use of artificial intelligence in criminal investigations and fighting crime is a necessity in today's world. The use of artificial intelligence goes far beyond, and is even separate from, other technologies in the struggle against crime, and its application in criminal science extends beyond prevention to the prediction of crime. Crime prevention addresses three factors: the offense, the offender and the victim. Changing the conditions of these factors rests on the perception of the criminal as rational, so that increasing the cost and risk of crime leads him to desist from delinquency, while making the victim aware of self-care and of the danger of exposure makes crimes more difficult to commit. The presence of artificial intelligence in the field of combating crime and social damage acts like an all-seeing eye: regardless of time and place, it looks ahead and predicts the occurrence of a possible crime, and thus helps prevent crimes from occurring. The purpose of this article is to collect and analyze the studies conducted on the use of artificial intelligence in predicting and preventing crime. How capable is this technology of predicting crime and preventing it? The results show that the artificial intelligence technologies in use are capable of predicting and preventing crime and can find patterns in large data sets in a much more efficient way than humans. 
In crime prediction and prevention, the term artificial intelligence can be used to refer to the increasing use of technologies that apply algorithms to large sets of data to assist or replace police. The use of artificial intelligence in our debate is in predicting and preventing crime, including predicting the time and place of future criminal activities, effective identification of patterns and accurate prediction of future behavior through data mining, machine learning and deep learning, and data analysis, as well as the use of neural networks. Because the knowledge of criminologists can provide insight into risk factors for criminal behavior, among other issues, computer scientists can match this knowledge with the datasets that artificial intelligence uses.

Keywords: artificial intelligence, criminology, crime, prevention, prediction

Procedia PDF Downloads 75
402 Quantification of the Erosion Effect on Small Caliber Guns: Experimental and Numerical Analysis

Authors: Dhouibi Mohamed, Stirbu Bogdan, Chabotier André, Pirlot Marc

Abstract:

The effects of erosion and wear on the performance of small caliber guns have been analyzed through numerical and experimental studies; mainly, qualitative observations have been performed, and correlations between the volume change of the chamber and the maximum pressure are limited. This paper focuses on the development of a numerical model to predict the evolution of the maximum pressure as the interior shape of the chamber changes over the weapon’s life phases. To fulfill this goal, an experimental campaign, followed by a numerical simulation study, is carried out. Two test barrels, 5.56x45mm NATO and 7.62x51mm NATO, are considered. First, a Coordinate Measuring Machine (CMM) with a contact scanning probe is used to measure the interior profile of the barrels after each 300-shot cycle until they were worn out. Simultaneously, the EPVAT (Electronic Pressure Velocity and Action Time) method with a special WEIBEL radar is used to measure: (i) the chamber pressure, (ii) the action time, and (iii) the bullet velocity in each barrel. Second, a numerical simulation study is carried out. Thus, a coupled interior ballistic model is developed using the dynamic finite element program LS-DYNA. In this work, two different models are elaborated: (i) a coupled Eulerian-Lagrangian model using fluid-structure interaction (FSI) techniques, and (ii) a coupled thermo-mechanical finite element model using a lumped parameter model (LPM) as a subroutine. These numerical models are validated and checked against three experimental results: (i) the muzzle velocity, (ii) the chamber pressure, and (iii) the surface morphology of fired projectiles. Results show a good agreement between experiments and numerical simulations. Next, a comparison between the two models is conducted. The projectile motions, the dynamic engraving resistances and the maximum pressures are compared and analyzed. 
Finally, using this obtained database, a statistical correlation between the muzzle velocity, the maximum pressure and the chamber volume is established.
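The final correlation step amounts to regressing the measured quantities against each other. A minimal sketch of an ordinary least-squares fit with a Pearson correlation coefficient; the data below are fabricated for illustration only, not the paper's measurements:

```python
from statistics import mean

def linregress(x, y):
    """Ordinary least-squares fit y = a*x + b, plus Pearson correlation r."""
    mx, my = mean(x), mean(y)
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    syy = sum((yi - my) ** 2 for yi in y)
    a = sxy / sxx
    return a, my - a * mx, sxy / (sxx * syy) ** 0.5

# Fabricated illustration only (not the paper's data): maximum pressure
# falling as the chamber volume grows with wear.
volume = [1.00, 1.02, 1.04, 1.06, 1.08]   # normalized chamber volume
pressure = [360, 352, 347, 339, 333]      # MPa
slope, intercept, r = linregress(volume, pressure)
print(f"slope = {slope:.1f} MPa per unit volume, r = {r:.3f}")
```

A strong negative r on such data is what would justify using chamber volume, measured with the CMM, as a wear indicator for the expected maximum pressure.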

Keywords: engraving process, finite element analysis, gun barrel erosion, interior ballistics, statistical correlation

Procedia PDF Downloads 213
401 Advancements in Mathematical Modeling and Optimization for Control, Signal Processing, and Energy Systems

Authors: Zahid Ullah, Atlas Khan

Abstract:

This abstract focuses on the advancements in mathematical modeling and optimization techniques that play a crucial role in enhancing the efficiency, reliability, and performance of these systems. In this era of rapidly evolving technology, mathematical modeling and optimization offer powerful tools to tackle the complex challenges faced by control, signal processing, and energy systems. This abstract presents the latest research and developments in mathematical methodologies, encompassing areas such as control theory, system identification, signal processing algorithms, and energy optimization. The abstract highlights the interdisciplinary nature of mathematical modeling and optimization, showcasing their applications in a wide range of domains, including power systems, communication networks, industrial automation, and renewable energy. It explores key mathematical techniques, such as linear and nonlinear programming, convex optimization, stochastic modeling, and numerical algorithms, that enable the design, analysis, and optimization of complex control and signal processing systems. Furthermore, the abstract emphasizes the importance of addressing real-world challenges in control, signal processing, and energy systems through innovative mathematical approaches. It discusses the integration of mathematical models with data-driven approaches, machine learning, and artificial intelligence to enhance system performance, adaptability, and decision-making capabilities. The abstract also underscores the significance of bridging the gap between theoretical advancements and practical applications. It recognizes the need for practical implementation of mathematical models and optimization algorithms in real-world systems, considering factors such as scalability, computational efficiency, and robustness. In summary, this abstract showcases the advancements in mathematical modeling and optimization techniques for control, signal processing, and energy systems. 
It highlights the interdisciplinary nature of these techniques, their applications across various domains, and their potential to address real-world challenges. The abstract emphasizes the importance of practical implementation and integration with emerging technologies to drive innovation and improve the performance of control, signal processing, and energy systems.

Keywords: mathematical modeling, optimization, control systems, signal processing, energy systems, interdisciplinary applications, system identification, numerical algorithms

Procedia PDF Downloads 112
400 Agent-Based Modeling Investigating Self-Organization in Open, Non-equilibrium Thermodynamic Systems

Authors: Georgi Y. Georgiev, Matthew Brouillet

Abstract:

This research applies the power of agent-based modeling to a pivotal question at the intersection of biology, computer science, physics, and complex systems theory about the self-organization processes in open, complex, non-equilibrium thermodynamic systems. Central to this investigation is the principle of Maximum Entropy Production (MEP). This principle suggests that such systems evolve toward states that optimize entropy production, leading to the formation of structured environments. It is hypothesized that guided by the least action principle, open thermodynamic systems identify and follow the shortest paths to transmit energy and matter, resulting in maximal entropy production, internal structure formation, and a decrease in internal entropy. Concurrently, it is predicted that there will be an increase in system information as more information is required to describe the developing structure. To test this, an agent-based model is developed simulating an ant colony's formation of a path between a food source and its nest. Utilizing the Netlogo software for modeling and Python for data analysis and visualization, self-organization is quantified by calculating the decrease in system entropy based on the potential states and distribution of the ants within the simulated environment. External entropy production is also evaluated for information increase and efficiency improvements in the system's action. Simulations demonstrated that the system begins at maximal entropy, which decreases as the ants form paths over time. A range of system behaviors contingent upon the number of ants are observed. Notably, no path formation occurred with fewer than five ants, whereas clear paths were established by 200 ants, and saturation of path formation and entropy state was reached at populations exceeding 1000 ants. This analytical approach identified the inflection point marking the transition from disorder to order and computed the slope at this point. 
Combined with extrapolation to the final path entropy, these parameters yield important insights into the eventual entropy state of the system and the timeframe for its establishment, enabling the estimation of the self-organization rate. This study provides a novel perspective on the exploration of self-organization in thermodynamic systems, establishing a correlation between internal entropy decrease rate and external entropy production rate. Moreover, it presents a flexible framework for assessing the impact of external factors like changes in world size, path obstacles, and friction. Overall, this research offers a robust, replicable model for studying self-organization processes in any open thermodynamic system. As such, it provides a foundation for further in-depth exploration of the complex behaviors of these systems and contributes to the development of more efficient self-organizing systems across various scientific fields.
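The entropy computation described above, a positional entropy that falls as the ants converge on a path, can be sketched as follows; the positions and cell layout are illustrative:

```python
import math
from collections import Counter

def positional_entropy(positions):
    """Shannon entropy (bits) of a set of agent positions over grid cells:
    maximal when agents are spread uniformly, lower once a path concentrates them."""
    n = len(positions)
    return -sum((c / n) * math.log2(c / n)
                for c in Counter(positions).values())

scattered = [0, 1, 2, 3, 4, 5, 6, 7]   # one ant per cell: disorder
on_path = [0, 0, 0, 0, 1, 1, 1, 1]     # ants concentrated on two cells: order
print(positional_entropy(scattered))   # 3.0
print(positional_entropy(on_path))     # 1.0
```

Tracking this quantity over simulation ticks gives exactly the disorder-to-order curve the study analyzes, with the inflection point and slope read off the resulting time series.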

Keywords: complexity, self-organization, agent based modelling, efficiency

Procedia PDF Downloads 68
399 Suitability of Wood Sawdust Waste Reinforced Polymer Composite for Fireproof Doors

Authors: Timine Suoware, Sylvester Edelugo, Charles Amgbari

Abstract:

The susceptibility of natural fibre polymer composites to flame has necessitated research to improve and develop flame retardants (FR) that delay the escape of combustible volatiles. Previous approaches relied mostly on FRs such as aluminium tri-hydroxide (ATH) and ammonium polyphosphate (APP) to improve the fire performance of wood sawdust polymer composites (WSPC), with emphasis on non-structural building applications. In this paper, APP was modified with gum Arabic powder (GAP) and then hybridized with ATH at 0, 12 and 18% loading ratios to form new FR species: WSPC12%APP-GAP and WSPC18%ATH/APP-GAP. The FR species were incorporated in wood sawdust waste reinforced in polyester resin to form panels for fireproof doors. The panels were produced using a hand lay-up compression moulding technique and cured at room temperature. Specimens cut from the panels were tested for tensile strength (TS), flexural strength (FS) and impact strength (IS) using a universal testing machine and an impact tester; thermal stability was measured using a TGA/DSC 1 (Mettler Toledo); and time-to-ignition (Tig), heat release rates (HRR), peak HRR (HRRp), average HRR (HRRavg), total HRR (THR), peak mass loss rate (MLRp), average smoke production rate (SPRavg) and carbon monoxide production (COP) were obtained using the cone calorimeter apparatus. From the mechanical properties obtained, improvements in IS for the panels were not noticeable, whereas TS and FS for WSPC12%APP-GAP stood at 12.44 MPa and 85.58 MPa respectively, higher than those of the panels without FR (WSPC0%). For WSPC18%ATH/APP-GAP, TS and FS stood at 16.45 MPa and 50.49 MPa respectively, again higher than for WSPC0%. From the thermal analysis, the panels did not exhibit any significant change, as early degradation was observed. At 900 °C, the char residues improved to 15% for WSPC12%APP-GAP and 19% for WSPC18%ATH/APP-GAP, compared to 5% for WSPC0%, confirming APP-GAP to be a good FR. 
At 50 kW/m2 heat flux (HF), WSPC12%APP-GAP improved better the fire behaviour of the panels when compared to WSC0% as follows; Tig = 46 s, HRRp = 56.1 kW/2, HRRavg = 32.8 kW/m2, THR = 66.6 MJ/m2, MLRp = 0.103 g/s, TSR = 0.04 m2/s and COP = 0.051 kg/kg. These were respectively more than WSC0%. It can be concluded that the new concept of modifying FR with GAP in WSC could meet the requirement of a fireproof door for building applications.
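The cone-calorimeter quantities reported above are linked by a simple relation: THR is the time integral of the HRR curve, while HRRp and HRRavg are its peak and mean. A minimal sketch with an invented, synthetic HRR curve (not the measured WSPC data):

```python
import numpy as np

# Synthetic HRR curve (kW/m^2) sampled every 5 s -- illustrative only,
# not the cone-calorimeter measurements from the study.
t = np.arange(0, 600, 5.0)                      # time, s
hrr = 60.0 * np.exp(-((t - 120) / 90.0) ** 2)   # invented peak near 120 s

hrr_peak = hrr.max()                  # peak HRR (HRRp), kW/m^2
hrr_avg = hrr.mean()                  # average HRR (HRRavg), kW/m^2
thr = np.trapz(hrr, t) / 1000.0       # THR, MJ/m^2 (kW*s -> MJ)

print(round(hrr_peak, 1), round(thr, 2))
```

With a measured curve substituted in, the same three lines reproduce HRRp, HRRavg and THR as reported from the cone calorimeter.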

Keywords: composite, flame retardant, wood sawdust, fireproof doors

Procedia PDF Downloads 107
398 Analyzing the Effects of Bio-fibers on the Stiffness and Strength of Adhesively Bonded Thermoplastic Bio-fiber Reinforced Composites by a Mixed Experimental-Numerical Approach

Authors: Sofie Verstraete, Stijn Debruyne, Frederik Desplentere

Abstract:

Considering environmental issues, interest in applying sustainable materials in industry is increasing. Specifically for composites, there is an emerging need for suitable materials and bonding techniques. As an alternative to traditional composites, short bio-fiber (cellulose-based flax) reinforced Polylactic Acid (PLA) is gaining popularity. However, these thermoplastic-based composites show issues in adhesive bonding. This research focuses on analyzing the effects of the fibers near the bonding interphase. The research applies injection-molded plate structures. A first important parameter is the fiber volume fraction, which directly affects the adhesion characteristics of the surface; it is varied between 0% (pure PLA) and 30%. Next to fiber volume fraction, the orientation of fibers near the bonding surface governs the adhesion characteristics of the injection-molded parts. This parameter is not directly controlled in this work, but its effects are analyzed. Surface roughness also greatly determines surface wettability, and thus adhesion. Therefore, this work considers three different roughness conditions, with different mechanical treatments yielding roughness values up to 0.5 mm. In this preliminary research, only one adhesive type is considered: a two-part epoxy cured at 23 °C for 48 hours. To ensure a dedicated parametric study, simple and reproducible adhesive bonds are manufactured. Both single-lap (substrate width 25 mm, thickness 3 mm, overlap length 10 mm) and double-lap tests are considered, since these are well documented and quite straightforward to conduct. These tests are conducted for the different substrate and surface conditions. Dog-bone tensile testing is applied to retrieve the stiffness and strength characteristics of the substrates (with different fiber volume fractions).
Numerical modelling (non-linear FEA) relates the effects of the considered parameters to the stiffness and strength of the different joints obtained through the above-mentioned tests. Ongoing work deals with developing dedicated numerical models incorporating the different adhesion parameters considered. Although this work is the start of an extensive research project on the bonding characteristics of thermoplastic bio-fiber reinforced composites, some interesting results are already apparent. Firstly, a clear correlation between the surface roughness and the wettability of the substrates is observed. Given the adhesive type (and viscosity), surface energy is found to increase in proportion to surface roughness, to some extent; this becomes more pronounced as fiber volume fraction increases. Secondly, ultimate bond strength (single lap) also increases with increasing fiber volume fraction. On a macroscopic level, this confirms the positive effect of fibers near the adhesive bond line.
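For the single-lap geometry described above (25 mm width, 10 mm overlap), the apparent bond strength is conventionally the failure load divided by the overlap area. A small sketch; the failure load used below is a made-up placeholder, not a measured value:

```python
# Apparent lap-shear strength of a single-lap joint: tau = F / (w * L).
# Width and overlap follow the abstract; the failure load is illustrative.
def lap_shear_strength(failure_load_n: float, width_mm: float = 25.0,
                       overlap_mm: float = 10.0) -> float:
    """Return apparent shear strength in MPa (N per mm^2 of overlap)."""
    return failure_load_n / (width_mm * overlap_mm)

# Hypothetical 2500 N failure load over the 25 x 10 mm^2 overlap -> 10.0 MPa
print(lap_shear_strength(2500.0))
```

Comparing this quantity across fiber volume fractions and roughness conditions is one way to express the bond-strength trends reported above.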

Keywords: adhesive bonding, bio-fiber reinforced composite, flax fibers, lap joint

Procedia PDF Downloads 127
397 Load-Deflection Characteristics of a Fabricated Orthodontic Wire with 50.6Ni 49.4Ti Alloy Composition

Authors: Aphinan Phukaoluan, Surachai Dechkunakorn, Niwat Anuwongnukroh, Anak Khantachawana, Pongpan Kaewtathip, Julathep Kajornchaiyakul, Peerapong Tua-Ngam

Abstract:

Aims: The objective of this study was to determine the load-deflection characteristics of a fabricated orthodontic wire with an alloy composition of 50.6% (atomic weight) Ni and 49.4% (atomic weight) Ti, and to compare the results with Ormco, a commercially available pre-formed NiTi orthodontic archwire. Materials and Methods: Ingot alloys with an atomic weight ratio of 50.6 Ni : 49.4 Ti were used in this study. Three specimens were cut to wire dimensions of 0.016 inch × 0.022 inch. For comparison, a commercially available pre-formed NiTi archwire, Ormco, with dimensions of 0.016 inch × 0.022 inch was used. Three-point bending tests were performed at 36 ± 1 °C using a universal testing machine on the newly fabricated and commercial archwires to assess the characteristics of the load-deflection curve under loading and unloading forces. The loading and unloading features at deflection points of 0.25, 0.50, 0.75, 1.0, 1.25, and 1.5 mm were compared. Descriptive statistics were used to evaluate each variable, and an independent t-test at p < 0.05 was used to analyze the mean differences between the two groups. Results: The load-deflection curve of the 50.6Ni:49.4Ti wires exhibited the characteristic features of superelasticity. The loading and unloading slopes of the Ormco NiTi archwire curve were more parallel than those of the newly fabricated NiTi wires. The average deflection force of the 50.6Ni:49.4Ti wire was 304.98 g for loading and 208.08 g for unloading. The corresponding values for the Ormco NiTi archwire were 358.02 g for loading and 253.98 g for unloading. The interval difference forces between deflection points were in the range 20.40-121.38 g (loading) and 36.72-92.82 g (unloading) for the 50.6Ni:49.4Ti wire, and 4.08-157.08 g (loading) and 14.28-90.78 g (unloading) for the commercial wire.
The average deflection force of the 50.6Ni:49.4Ti wire was less than that of the Ormco NiTi archwire, which could have been due to variations in the wire dimensions. Although the forces at each deflection point of loading and unloading differed between the 50.6Ni:49.4Ti wire and the Ormco NiTi archwire, the values were still within the limits accepted for clinical use in orthodontic treatment. Conclusion: The 50.6Ni:49.4Ti wires presented the characteristics of a superelastic orthodontic wire. The loading and unloading forces were also suitable for orthodontic tooth movement. These results serve as a suitable foundation for further studies in the development of new orthodontic NiTi archwires.
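The group comparison above (independent t-test at p < 0.05) can be sketched as follows. The force readings are invented placeholders, not the study's measurements; in practice the p-value would be read from the t distribution (e.g. via scipy.stats.ttest_ind):

```python
import statistics as st

# Illustrative loading forces (g) at one deflection point -- placeholder
# values only, three specimens per group as in the study design.
fabricated = [300.0, 305.0, 310.0]   # 50.6Ni:49.4Ti wire
commercial = [355.0, 358.0, 361.0]   # Ormco archwire

def t_statistic(a, b):
    """Independent two-sample t statistic with pooled variance."""
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * st.variance(a) + (nb - 1) * st.variance(b)) / (na + nb - 2)
    return (st.mean(a) - st.mean(b)) / (sp2 * (1 / na + 1 / nb)) ** 0.5

print(round(t_statistic(fabricated, commercial), 2))
```

A large-magnitude t statistic at n = 3 per group would indicate a mean force difference significant at p < 0.05.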

Keywords: 50.6Ni 49.4Ti alloy wire, load-deflection curve, loading and unloading force, orthodontics

Procedia PDF Downloads 303
396 Evaluation of the Self-Organizing Map and the Adaptive Neuro-Fuzzy Inference System Machine Learning Techniques for the Estimation of Crop Water Stress Index of Wheat under Varying Application of Irrigation Water Levels for Efficient Irrigation Scheduling

Authors: Aschalew C. Workneh, K. S. Hari Prasad, C. S. P. Ojha

Abstract:

The crop water stress index (CWSI) is a cost-effective, non-destructive, and simple technique for tracking the onset of crop water stress. This study investigated the feasibility of using CWSI derived from canopy temperature to detect the water status of wheat crops. Artificial intelligence (AI) techniques have become increasingly popular in recent years for determining CWSI. In this study, the performance of two AI techniques, the adaptive neuro-fuzzy inference system (ANFIS) and self-organizing maps (SOM), is compared in determining the CWSI of wheat crops. Field experiments with varying irrigation water applications were conducted during two seasons, in 2022 and 2023, at the irrigation field laboratory of the Civil Engineering Department, Indian Institute of Technology Roorkee, India. The ANFIS- and SOM-simulated CWSI values were compared with the experimentally calculated CWSI (EP-CWSI). Multiple regression analysis was used to determine the upper and lower CWSI baselines. The upper baseline was found to be a function of crop height and wind speed, while the lower baseline was a function of crop height, air vapor pressure deficit, and wind speed. The performance of ANFIS and SOM was compared based on mean absolute error (MAE), mean bias error (MBE), root mean squared error (RMSE), index of agreement (d), Nash-Sutcliffe efficiency (NSE), and coefficient of determination (R²). Both models estimated the CWSI of the wheat crop with high correlation coefficients and low statistical errors. However, ANFIS (R² = 0.81, NSE = 0.73, d = 0.94, RMSE = 0.04, MAE = 0.00-1.76 and MBE = -2.13-1.32) outperformed the SOM model (R² = 0.77, NSE = 0.68, d = 0.90, RMSE = 0.05, MAE = 0.00-2.13 and MBE = -2.29-1.45). Overall, the results suggest that ANFIS is a more reliable tool than SOM for accurately determining the CWSI of wheat crops.
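The empirical CWSI itself follows the standard baseline formulation: the measured canopy-air temperature difference is normalized between the lower (non-stressed) and upper (fully stressed) baselines. A minimal sketch with assumed baseline values; in the study the baselines are regression functions of crop height, wind speed and vapor pressure deficit:

```python
# Empirical CWSI from the canopy-air temperature difference dT = Tc - Ta
# and the two baselines. Baseline values below are placeholders, not the
# regression results from the study.
def cwsi(dT: float, dT_lower: float, dT_upper: float) -> float:
    """CWSI = (dT - dT_lower) / (dT_upper - dT_lower), clipped to [0, 1]."""
    value = (dT - dT_lower) / (dT_upper - dT_lower)
    return min(max(value, 0.0), 1.0)

# dT = 1.0 degC with an assumed lower baseline of -2.0 and upper of 4.0:
print(cwsi(1.0, -2.0, 4.0))  # (1 - (-2)) / (4 - (-2)) = 0.5
```

Values near 0 indicate a well-watered crop and values near 1 indicate full water stress, which is what the ANFIS and SOM models are trained to reproduce.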

Keywords: adaptive neuro-fuzzy inference system, canopy temperature, crop water stress index, self-organizing map, wheat

Procedia PDF Downloads 55
395 Novel Uses of Discarded Work Rolls of Cold Rolling Mills in Hot Strip Mill of Tata Steel India

Authors: Uday Shanker Goel, Vinay Vasant Mahashabde, Biswajit Ghosh, Arvind Jha, Amit Kumar, Sanjay Kumar Patel, Uma Shanker Pattanaik, Vinit Kumar Shah, Chaitanya Bhanu

Abstract:

Pinch rolls of hot mills must possess wear resistance, thermal stability, high thermal conductivity and through hardness. Conventionally, pinch rolls have been procured either new or refurbished. Discarded work rolls from the Cold Mill were machined in-house at Tata Steel and subsequently used as the bottom pinch rolls of the Hot Mill. The hardness of the scrapped work rolls from CRM is close to 55 HRC, with a typical composition of C 0.8%, Mn 0.40%, Si 0.40%, Cr 3.5%, Mo 0.5% and V 0.1%. The innovation was the use of a roll that would otherwise have been discarded as scrap, and which offers better wear and heat resistance. In a conventional pinch roll (hardness 50 HRC and typical chemistry C - 10%, Mo+Co+V+Nb ~ 5%), pick-up is a condition whereby foreign material becomes adhered to the surface of the pinch roll during service; the foreign material is usually metal from the product being rolled. The main attributes of weld overlay rolls are wear resistance and crack resistance. However, a weld overlay roll has a strong tendency for strip pick-up, particularly in the area of bead overlap. Its greatest disadvantage is the depth of the weld deposit, which is less than half of the usable shell thickness in most mills; because of this, stainless rolls require re-welding on a routine basis. By providing a significantly cheaper, in-house and more robust alternative to the existing bottom pinch rolls, this innovation greatly reduces the burden on the roll shop. Pinch rolls no longer have to be sent outside Jamshedpur for refurbishment, nor do new ones have to be procured. Scrapped rolls from the adjacent Cold Mill are sent for machining to the Machine Shop inside Tata Steel Works in Jamshedpur, which is far more convenient than the older methodology. The idea is also being deployed at the other hot mills of Tata Steel.
Multiple campaigns have been carried out at both down coilers of the Hot Strip Mill, with significantly lower wear.

Keywords: hot rolling flat, cold mill work roll, hot strip pinch roll, strip surface

Procedia PDF Downloads 128
394 Hyperelastic Constitutive Modelling of the Male Pelvic System to Understand the Prostate Motion, Deformation and Neoplasms Location with the Influence of MRI-TRUS Fusion Biopsy

Authors: Muhammad Qasim, Dolors Puigjaner, Josep Maria López, Joan Herrero, Carme Olivé, Gerard Fortuny

Abstract:

Computational modeling of the human pelvis using the finite element (FE) method has become extremely important for understanding the mechanics of prostate motion and deformation during transrectal ultrasound (TRUS) guided biopsy. The number of reliable and validated hyperelastic constitutive FE models of the male pelvic region is limited, and the available models do not precisely describe the anatomical behavior of the pelvic organs, particularly the prostate and the location of its neoplasms. The motion and deformation of the prostate during TRUS-guided biopsy make it difficult to know the location of potential lesions in advance. When using this procedure, practitioners can only roughly estimate lesion locations; consequently, multiple biopsy samples are required to target a single lesion. In this study, a whole-pelvis model (comprising the rectum, bladder, pelvic muscles, prostate transitional zone (TZ), and peripheral zone (PZ)) is used for the simulations. An isotropic hyperelastic approach (the Signorini model) was used for all soft tissues except the vesical muscles, which are assumed to behave linearly elastically due to the lack of experimental data for determining the constants involved in hyperelastic models. The tissue and organ geometries for the 3D meshes are taken from the existing literature, and the biomechanical parameters were obtained from different testing techniques described in the literature. The acquired parameter values for uniaxial stress/strain data are used in the Signorini model to examine the anatomical behavior of the pelvis model. Five mesh nodes representing small prostate lesions were selected prior to biopsy, and each lesion's final position was tracked when a TRUS probe force of 30 N was applied at the inside rectum wall. The open-source software Code_Aster is used for the numerical simulations. Moreover, the overall effects of pelvic organ deformation under TRUS-guided biopsy were demonstrated.
The deformation of the prostate and the displacement of the neoplasms showed that the material properties assigned to the organs parametrically altered the resulting lesion migration. The distance traveled by these lesions ranged between 3.77 and 9.42 mm. The lesion displacement and organ deformation are compared with and analyzed against our previous study, in which linear elastic properties were used for all pelvic organs. Furthermore, axial and sagittal slices from Magnetic Resonance Imaging (MRI) and TRUS images are also compared visually with our preliminary study.
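For reference, one common form of the Signorini strain energy is W = C10(I1-3) + C01(I2-3) + C20(I1-3)², which for an incompressible uniaxial stretch λ yields a closed-form Cauchy stress. A sketch with placeholder material constants, not the fitted values used in these simulations:

```python
# Uniaxial Cauchy stress for an incompressible Signorini-type hyperelastic
# solid, W = C10*(I1-3) + C01*(I2-3) + C20*(I1-3)^2.
# sigma = 2*(lam^2 - 1/lam) * (dW/dI1 + (1/lam)*dW/dI2)
def signorini_uniaxial_stress(lam, c10, c01, c20):
    i1 = lam**2 + 2.0 / lam              # first invariant, uniaxial stretch
    dW_dI1 = c10 + 2.0 * c20 * (i1 - 3.0)
    dW_dI2 = c01                          # constant for this energy form
    return 2.0 * (lam**2 - 1.0 / lam) * (dW_dI1 + dW_dI2 / lam)

# Stress at 10% stretch with assumed constants (units follow the constants)
print(round(signorini_uniaxial_stress(1.10, 20.0, 5.0, 2.0), 2))
```

Fitting c10, c01 and c20 to the uniaxial stress/strain data from the literature is what supplies the parameter values used in the FE simulations.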

Keywords: code-aster, magnetic resonance imaging, neoplasms, transrectal ultrasound, TRUS-guided biopsy

Procedia PDF Downloads 87
393 A Comprehensive Framework for Fraud Prevention and Customer Feedback Classification in E-Commerce

Authors: Samhita Mummadi, Sree Divya Nagalli, Harshini Vemuri, Saketh Charan Nakka, Sumesh K. J.

Abstract:

One of the most significant challenges of today's digital era is the alarming increase in fraudulent activity on online platforms. The appeal of online shopping, avoiding long queues in shopping malls, the availability of a wide variety of products, and home delivery of goods, has driven the rapid growth of online shopping platforms, and with it a major increase in fraudulent activity. For instance, consider a store that orders thousands of products at once: the massive number of items purchased and the associated transactions may turn out to be fraudulent, leading to a huge loss for the seller. Scenarios like these underscore the urgent need for machine learning approaches to combat fraud in online shopping. By leveraging robust algorithms, namely KNN, Decision Trees, and Random Forest, which are highly effective at generating accurate results, this research endeavors to discern patterns indicative of fraudulent behavior within transactional data. The primary focus is a comprehensive solution that empowers e-commerce administrators in timely fraud detection and prevention. In addition, sentiment analysis is harnessed in the model so that the e-commerce administrator can respond to customers' concerns, feedback, and comments, improving the user experience. The ultimate objective of this study is to harden online shopping platforms against fraud and ensure a safer shopping experience. The model achieves an accuracy of 84%. The findings and observations noted during this work lay the groundwork for future advancements in the development of more resilient and adaptive fraud detection systems, which will become crucial as technologies continue to evolve.
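The nearest-neighbour idea behind one of the three classifiers can be sketched in a few lines. The transaction features and labels below are invented toy data; a production system would use scikit-learn's KNeighborsClassifier (or the Decision Tree / Random Forest counterparts) on the real feature set:

```python
import math
from collections import Counter

# Toy labelled transactions as (order_amount, items_count) -> label.
# Purely illustrative data, not the study's dataset.
train = [
    ((20.0, 1), "legit"), ((35.0, 2), "legit"), ((15.0, 1), "legit"),
    ((900.0, 40), "fraud"), ((1200.0, 55), "fraud"), ((800.0, 35), "fraud"),
]

def knn_predict(x, k=3):
    """Majority vote among the k nearest training points (Euclidean)."""
    nearest = sorted(train, key=lambda p: math.dist(x, p[0]))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

print(knn_predict((1000.0, 50)))  # bulk-order pattern -> "fraud"
print(knn_predict((25.0, 2)))     # small routine order -> "legit"
```

Feature scaling and handling of class imbalance (noted in the keywords) matter greatly for such distance-based classifiers in practice.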

Keywords: behavior analysis, feature selection, fraudulent pattern recognition, imbalanced classification, transactional anomalies

Procedia PDF Downloads 26