Search results for: mean distance between failures
1793 Gender Estimation by Means of Quantitative Measurements of Foramen Magnum: An Analysis of CT Head Images
Authors: Thilini Hathurusinghe, Uthpalie Siriwardhana, W. M. Ediri Arachchi, Ranga Thudugala, Indeewari Herath, Gayani Senanayake
Abstract:
The foramen magnum is better protected than other skeletal structures during high-impact and severely disruptive injuries. Therefore, it is worthwhile to explore whether its measurements can be used to estimate gender, which is vital in forensic and anthropological studies. The aim was to assess the ability of quantitative measurements of the foramen magnum to serve as an anatomical indicator for human gender estimation and to evaluate gender-dependent variations of the foramen magnum using these measurements. A randomly selected sample of 113 subjects who underwent CT head scans at Sri Jayawardhanapura General Hospital of Sri Lanka within a period of six months was included in the study. The sample contained 58 males (48.76 ± 14.7 years old) and 55 females (47.04 ± 15.9 years old). The maximum length of the foramen magnum (LFM), maximum width of the foramen magnum (WFM), minimum distance between occipital condyles (MnD), and maximum interior distance between occipital condyles (MxID) were measured. Further, AreaT and AreaR were also calculated. Gender was estimated using binomial logistic regression. The mean values of all explanatory variables (LFM, WFM, MnD, MxID, AreaT, and AreaR) were greater among males than females. All explanatory variables except MnD (p = 0.669) were statistically significant (p < 0.05). AreaT and AreaR showed significant bivariate correlations with the other explanatory variables. According to the binomial logistic regression, WFM and MxID were the best measurements for predicting gender. The estimated model was log(p/(1-p)) = 10.391 - 0.136×MxID - 0.231×WFM, where p is the probability of being female. The classification accuracy of this model was 65.5%. The quantitative measurements of the foramen magnum can be used as a reliable anatomical marker for human gender estimation in the Sri Lankan context.
Keywords: foramen magnum, forensic and anthropological studies, gender estimation, logistic regression
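The fitted model above can be evaluated directly; a minimal sketch, with hypothetical measurement values rather than data from the study:

```python
import math

def p_female(mxid_mm, wfm_mm):
    """Probability of being female under the paper's fitted model:
    log(p / (1 - p)) = 10.391 - 0.136*MxID - 0.231*WFM (measurements in mm)."""
    logit = 10.391 - 0.136 * mxid_mm - 0.231 * wfm_mm
    return 1.0 / (1.0 + math.exp(-logit))

# Hypothetical measurements: smaller foramen dimensions yield a higher P(female).
print(round(p_female(25.0, 28.0), 3))
print(p_female(25.0, 28.0) > p_female(30.0, 33.0))
```

The negative coefficients on MxID and WFM reflect the finding that all explanatory variables were larger in males.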
Procedia PDF Downloads 151
1792 On-Line Data-Driven Multivariate Statistical Prediction Approach to Production Monitoring
Authors: Hyun-Woo Cho
Abstract:
Detection of incipient abnormal events in production processes is important for improving the safety and reliability of manufacturing operations and for reducing losses caused by failures. The construction of calibration models for predicting faulty conditions is essential for deciding when to perform preventive maintenance. This paper presents a multivariate calibration monitoring approach based on statistical analysis of process measurement data. The calibration model is used to predict faulty conditions from historical reference data. The approach utilizes variable selection techniques, and the predictive performance of several prediction methods is evaluated using real data. The results show that a calibration model based on a supervised probabilistic model yielded the best performance in this work. By adopting a proper variable selection scheme, prediction performance can be improved by excluding non-informative variables from the model building steps.
Keywords: calibration model, monitoring, quality improvement, feature selection
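The variable selection step can be illustrated with a simple filter-type scheme that drops variables weakly correlated with the fault indicator. This is a generic sketch under that assumption; the abstract does not state which selection technique was actually used:

```python
def pearson_r(x, y):
    """Sample Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x) ** 0.5
    vy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (vx * vy)

def select_variables(columns, target, threshold=0.3):
    """Keep only variables whose absolute correlation with the fault
    indicator exceeds a threshold (a filter-type selection scheme;
    the threshold value here is an illustrative assumption)."""
    return [name for name, col in columns.items()
            if abs(pearson_r(col, target)) >= threshold]

cols = {"informative": [1, 2, 3, 4], "noise": [3, 1, 4, 1]}
print(select_variables(cols, [2, 4, 6, 8]))
```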
Procedia PDF Downloads 357
1791 Dynamic and Thermal Characteristics of Three-Dimensional Turbulent Offset Jet
Authors: Ali Assoudi, Sabra Habli, Nejla Mahjoub Saïd, Philippe Bournot, Georges Le Palec
Abstract:
Studying the flow characteristics of a turbulent offset jet is an important topic among researchers across the world because of its various engineering applications. Common examples include injection and carburetor systems, entrainment and mixing processes in gas turbine and boiler combustion chambers, thrust-augmenting ejectors for V/STOL aircraft and HVAC systems, environmental discharges, film cooling, and many others. An offset jet is formed when a jet discharges into a medium above a horizontal solid wall that is parallel to the axis of the jet exit but offset from it by a certain distance. The structure of a turbulent offset jet can be described by three main regions. Close to the nozzle exit, an offset jet possesses characteristic features similar to those of free jets. Then, the entrainment of fluid between the jet, the offset wall, and the bottom wall creates a low-pressure zone, forcing the jet to deflect towards the wall and eventually attach to it at the impingement point. This is referred to as the Coanda effect. Further downstream of the reattachment point, the offset jet has the characteristics of a wall-jet flow. The offset jet therefore combines characteristics of free, impingement, and wall jets, and is relatively more complex than these flows. The present study examines the dynamic and thermal evolution of a 3D turbulent offset jet with different offset height ratios (the ratio of the distance from the jet exit to the impingement bottom wall to the jet nozzle diameter). To this end, a numerical study was conducted to investigate a three-dimensional offset jet flow through the resolution of the governing Navier-Stokes equations by means of the finite volume method and the RSM second-order turbulence closure model.
A detailed discussion is provided of the flow and thermal characteristics in the form of streamlines, mean velocity vectors, pressure fields, and Reynolds stresses.
Keywords: offset jet, offset ratio, numerical simulation, RSM
Procedia PDF Downloads 304
1790 Benefits of Automobile Electronic Technology in the Logistics Industry in Third World Countries
Authors: Jonathan Matyenyika
Abstract:
In recent years, automobile manufacturers have increasingly produced vehicles equipped with cutting-edge automotive electronic technology to match today's fast-paced digital world; this has brought various benefits to the business sectors that use these vehicles to turn a profit. In the logistics industry, vehicles equipped with this technology have proved to be very useful; this paper focuses on the benefits that such vehicles bring to the logistics industry. Manufacturers have introduced new electronic features to enhance overall vehicle performance, efficiency, safety, and driver comfort, and some of these features have proved beneficial to logistics operators. Consider first the introduction of adaptive cruise control in long-distance haulage vehicles. To see how this system benefits drivers, we carried out research in the form of interviews with long-distance truck drivers, the main question being what major difference they had experienced since starting to operate vehicles equipped with this technology. Most stated that they are less tired and can drive longer distances than with vehicles not equipped with the system. As a result, they can deliver faster and take on the next assignment sooner, improving efficiency and bringing greater monetary returns to the logistics company. Second is the introduction of electric hybrid technology, which allows the vehicle to be propelled by electric power stored in on-board batteries instead of by fossil fuel. This benefits the logistics company because vehicles become cheaper to run, as electricity is more affordable than fossil fuel.
The merging of electronic systems into vehicles has proved to be of great benefit, and this research shows that it can benefit the logistics industry in many ways.
Keywords: logistics, manufacturing, hybrid technology, haulage vehicles
Procedia PDF Downloads 59
1789 A Review of Current Practices in Tattooing of Colonic Lesion at Endoscopy
Authors: Dhanashree Moghe, Roberta Bullingham, Rizwan Ahmed, Tarun Singhal
Abstract:
Aim: The NHS Bowel Cancer Screening Programme recommends endoscopic tattooing of suspected malignant lesions that will later require surgical or endoscopic localisation, using local protocols as guidance. This is in accordance with guidance from the BSG (British Society of Gastroenterology). We used a well-recognised local protocol as a standard to audit current tattooing practice in a large district general hospital with no local guidelines of its own. Method: A retrospective quantitative analysis of 50 patients who underwent segmental colonic resection for cancer over a 6-month period in 2021. We reviewed historical electronic endoscopy reports, recording relevant data on tattoo indication and placement. Secondly, we carried out an anonymous survey of 16 independent lower GI endoscopists on self-reported details of their practice. Results: In our study, 28 patients (56%) had a tattoo placed at the time of their colonoscopy. Of these, only 53% (n=15) had the tattoo placed distal to the lesion, and the measured distance of the tattoo from the lesion was documented in only 8 reports. Only seven patients (25%) had circumferential (4-quadrant) placement of the tattoo. 13 patients had lesions in the caecum or rectum, locations where tattooing is deemed unnecessary as per BSG guidelines. Among the survey responses collected, four different protocols were being used to guide practice. Only 50% of respondents placed tattoos at the correct distance from the lesion, and 83% placed the correct number of tattoos. Conclusion: Our study demonstrates a lack of standardisation in colonic tattooing practice, with incomplete compliance with our standard. Inadequate documentation of tattoo location can contribute to confusion and inaccuracy in the intraoperative localisation of lesions, with the potential to increase operation length and morbidity.
There is a need to standardise both technique and documentation in colonoscopic tattooing practice.
Keywords: colorectal cancer, endoscopic tattooing, colonoscopy, NHS BSCP
Procedia PDF Downloads 120
1788 Challenges Brought about by Integrating Multiple Stakeholders into Farm Management Mentorship of Land Reform Beneficiaries in South Africa
Authors: Carlu Van Der Westhuizen
Abstract:
The South African agricultural sector is of major socio-economic importance to the country due to its contribution to maintaining stability in food production and food security, providing labour opportunities, eradicating poverty, and earning foreign currency. Against this reality, this paper investigates the changes in land policies within the South African agricultural sector that the newly elected democratic government (African National Congress) has brought about since its takeover in 1994. The change in the agricultural environment is decidedly dualistic, with 1) a commercial sector, and 2) a subsistence and emerging farmer sector. The future demands and challenges are mostly identified as those of land redistribution and social upliftment. Opportunities that arose from the challenge of change include smallholder participation in the value chain, while the challenge of change in agriculture and the opportunities that were identified could serve as a yardstick against which the sector's performance could be measured in future. Unfortunately, despite all of government's policies, programmes, and projects, and the inputs of the private sector, the outcomes are, to a large extent, unsuccessful. The urgency of the Land Redistribution Programme is underscored by the fact that, in the period 1994-2014, only 7.5% of the 30% redistribution target was achieved. Another serious concern is that 90% of land redistribution projects are not in a state of productive use by emerging farmers. Several reasons may be offered for these failures, amongst others the uncoordinated way in which different stakeholders are involved in a specific farming project.
These stakeholders can generally be identified as: - the Government as the policy maker; - the private sector, which has the potential to contribute to the sustainable pre- and post-settlement stages of the Programme by coordinating its supporting services with Government; - communities in the rural areas where settlement takes place; - the landowners as sellers of land (e.g. a Traditional Council); and - the emerging beneficiaries as the receivers of land. Mentorship is mostly the medium through which this support is coordinated. This paper focuses on three scenarios of different types of mentorship (or management support), namely: - the Taung Irrigation Scheme (TIS), where multiple new land beneficiaries were established by sharing irrigation pivots and receiving mentorship support from commodity organisations within a traditional land-sharing system; - projects whereby the mentor is a strategic partner (mostly a major agricultural 'cooperative' that also provides inputs to the farmer and is responsible for purchasing/marketing all commodities produced); and - an individual mentor who is a private person focusing mainly on farm management mentorship without direct gain other than a monthly stipend paid to the mentor by Government. Against this background, the focus of the study is to investigate the process for the sustainable implementation of Government's land redistribution in South African agriculture. To achieve this, the research paper is presented under the themes of problem statement, objectives, methodology and limitations, and an outline of the research process, as well as proposing possible solutions.
Keywords: land reform, role-players, failures, mentorship, management models
Procedia PDF Downloads 271
1787 The Effect of Adhesion on the Frictional Hysteresis Loops at a Rough Interface
Authors: M. Bazrafshan, M. B. de Rooij, D. J. Schipper
Abstract:
Frictional hysteresis is the phenomenon in which mechanical contacts are subject to small (compared to the contact area) oscillating tangential displacements. In the presence of adhesion at the interface, the contact repulsive force increases, leading to a higher static friction force and pre-sliding displacement. This paper proposes a boundary element model (BEM) for the adhesive frictional hysteresis contact at the interface of two contacting bodies of arbitrary geometry. In this model, adhesion is represented by means of a Dugdale approximation of the total work of adhesion at local areas with a very small gap between the two bodies. The frictional contact is divided into sticking and slipping regions in order to take into account the transition from stick to slip (the pre-sliding regime). In the pre-sliding regime, the stick and slip regions are defined based on the local values of shear stress and normal pressure. In the studied cases, a fixed normal force is applied to the interface and the friction force varies in such a way as to initiate gross sliding reciprocally, first in one direction and then in the other. In the first case, the problem is solved at the smooth interface between a ball and a flat for different values of the work of adhesion. It is shown that as the work of adhesion increases, both the static friction and the pre-sliding distance increase due to the increase in the contact repulsive force. In the second case, the rough interface between a glass ball and either a silicon wafer or a DLC (diamond-like carbon) coating is considered. The work of adhesion is assumed to be identical for both interfaces. As adhesion depends on the interface roughness, the corresponding contact repulsive force differs between these interfaces. For the smoother interface, a larger contact repulsive force and, consequently, a larger static friction force and pre-sliding distance are observed.
Keywords: boundary element model, frictional hysteresis, adhesion, roughness, pre-sliding
Procedia PDF Downloads 168
1786 Critical Heights of Sloped Unsupported Trenches in Unsaturated Sand
Authors: Won Taek Oh, Adin Richard
Abstract:
Workers are often required to enter unsupported trenches during the construction process, which may present serious risks. Trench failures can result in death or damage to adjacent properties; therefore, trenches should be excavated with extreme precaution. Excavation work is often done in unsaturated soils, where the critical height (i.e., the maximum depth that can be excavated without failure) of unsupported trenches can be more reliably estimated by considering the influence of matric suction. In this study, coupled stress/pore-water pressure analyses are conducted to investigate the critical height of sloped unsupported trenches, considering the influence of the pore-water pressure redistribution caused by excavation. Four different wall slopes (1.5V:1H, 2V:1H, 3V:1H, and 90°) and a vertical trench with the top 0.3 m sloped 1:1 were considered in the analyses, with multiple depths of the groundwater table in a sand. For comparison, the critical heights were also estimated using the limit equilibrium method for the same excavation scenarios used in the coupled analyses.
Keywords: critical height, matric suction, unsaturated soil, unsupported trench
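For the limit-equilibrium side of such a comparison, the critical height of a vertical cut is classically sketched with the 4c/γ expression, folding matric suction into an apparent cohesion in the spirit of the extended Mohr-Coulomb criterion. The parameter values and the φb term below are illustrative assumptions, not the study's inputs:

```python
import math

def critical_height(c_eff_kpa, phi_deg, gamma_kn_m3, suction_kpa=0.0, phi_b_deg=0.0):
    """Classical limit-equilibrium critical height of a vertical cut,
    Hc = (4c / gamma) * tan(45 + phi/2), with matric suction contributing
    apparent cohesion: c = c' + (ua - uw) * tan(phi_b)."""
    c = c_eff_kpa + suction_kpa * math.tan(math.radians(phi_b_deg))
    return 4.0 * c / gamma_kn_m3 * math.tan(math.radians(45.0 + phi_deg / 2.0))

# Cohesionless sand: with zero suction a vertical cut has no stand-up height;
# a modest suction supplies apparent cohesion and a finite critical height.
print(critical_height(0.0, 35.0, 18.0, suction_kpa=10.0, phi_b_deg=15.0))
```

This captures why accounting for suction matters: in a dry or fully saturated sand with c' = 0, the formula predicts no unsupported stand-up height at all.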
Procedia PDF Downloads 121
1785 Development of Ceramic Spheres Buoyancy Modules for Deep-Sea Oil Exploration
Authors: G. Blugan, B. Jiang, J. Thornberry, P. Sturzenegger, U. Gonzenbach, M. Misson, D. Cartlidge, R. Stenerud, J. Kuebler
Abstract:
Low-cost ceramic spheres were developed and manufactured from the engineering ceramic aluminium oxide. Hollow spheres of 50 mm diameter with a wall thickness of 0.5-1.0 mm were produced via an adapted slip-casting technique. It was possible to produce the spheres with good repeatability and with no defects or failures due to the manufacturing process. The spheres were developed specifically for use in buoyancy devices for deep-sea exploration at depths of 3000 m below sea level. The spheres with a 1.0 mm wall thickness exhibit a buoyancy of over 54%, while the spheres with a 0.5 mm wall thickness exhibit a buoyancy of over 73%. The mechanical performance of the spheres was confirmed by performing a hydraulic burst pressure test on individual spheres. With a safety factor of 3, all spheres with a 1.0 mm wall thickness survived a hydraulic pressure of greater than 150 MPa, equivalent to a depth of more than 5000 m below sea level. The spheres were then incorporated into a buoyancy module. These hollow aluminium oxide ceramic spheres offer an excellent possibility for deep-sea exploration at depths greater than the currently used technology allows.
Keywords: buoyancy, ceramic spheres, deep-sea, oil exploration
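The reported buoyancy figures can be checked from the sphere geometry alone. A sketch assuming a typical dense alumina density of about 3950 kg/m³ and water at 1000 kg/m³, neither of which is stated in the abstract:

```python
import math

def buoyancy_fraction(outer_d_mm, wall_mm, rho_shell=3950.0, rho_fluid=1000.0):
    """Net buoyancy of a hollow sphere as a fraction of the displaced fluid
    weight: (displaced mass - shell mass) / displaced mass."""
    ro = outer_d_mm / 2000.0    # outer radius in metres
    ri = ro - wall_mm / 1000.0  # inner radius in metres
    v_outer = 4.0 / 3.0 * math.pi * ro ** 3
    v_shell = v_outer - 4.0 / 3.0 * math.pi * ri ** 3
    displaced = v_outer * rho_fluid
    return (displaced - v_shell * rho_shell) / displaced

# 50 mm spheres as in the abstract, at 1.0 mm and 0.5 mm wall thickness.
print(round(buoyancy_fraction(50.0, 1.0), 3))
print(round(buoyancy_fraction(50.0, 0.5), 3))
```

Under these assumed densities the geometry gives roughly 54% and 77%, consistent with the reported "over 54%" and "over 73%".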
Procedia PDF Downloads 416
1784 Availability Analysis of Milling System in a Rice Milling Plant
Authors: P. C. Tewari, Parveen Kumar
Abstract:
The paper describes the availability analysis of the milling system of a rice milling plant using a probabilistic approach. The subsystems under study are special-purpose machines. The availability analysis is carried out to determine the effect of the failure and repair rates of each subsystem on the overall performance (i.e., steady-state availability) of the system. Further, on the basis of the effect of repair rates on system availability, maintenance repair priorities are suggested. The problem is formulated as a Markov birth-death process, taking exponential distributions for the probable failure and repair rates. The first-order differential equations associated with the transition diagram are developed using the mnemonic rule. These equations are solved using normalizing conditions and a recursive method to derive the steady-state availability expression for the system. The findings of the paper are presented and discussed with the plant personnel in order to adopt a suitable maintenance policy to increase the productivity of the rice milling plant.
Keywords: availability modeling, Markov process, milling system, rice milling plant
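For a single repairable subsystem with exponential failure rate λ and repair rate μ, the birth-death model above reduces to the familiar steady-state result A = μ/(λ + μ), and for subsystems in series the availabilities multiply. A sketch with hypothetical rates (the plant's actual rates are not given here):

```python
def availability(failure_rate, repair_rate):
    """Steady-state availability of one repairable subsystem under the
    Markov birth-death model: A = mu / (lambda + mu)."""
    return repair_rate / (failure_rate + repair_rate)

def system_availability(rates):
    """Series system (all subsystems must be up): availabilities multiply.
    `rates` is a list of (failure_rate, repair_rate) pairs."""
    a = 1.0
    for lam, mu in rates:
        a *= availability(lam, mu)
    return a

# Hypothetical subsystem rates (per hour): failures are rare, repairs are fast.
print(round(availability(0.01, 0.5), 4))
print(round(system_availability([(0.01, 0.5), (0.02, 0.4)]), 4))
```

The series form also shows why repair priorities matter: the subsystem with the worst μ/(λ + μ) ratio dominates the product.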
Procedia PDF Downloads 236
1783 Improving Access to Training for Parents of Children with Autism Spectrum Disorders through Telepractice: Parental Perception
Authors: Myriam Rousseau, Marie-Hélène Poulin, Suzie McKinnon, Jacinthe Bourassa
Abstract:
Context: There is a growing demand for effective training programs for parents of children with autism spectrum disorders. While traditional in-person training is effective, it can be difficult for some parents to participate due to distance, time, and cost. Telepractice, a form of distance education, could be a viable alternative to address these challenges. Research objective: The objective of this study is to explore the experiences of parents of autistic children who participated in a telepractice training program in order to document: 1) the experience of parents who participated in a telepractice training program for autistic children, 2) parental satisfaction with the telepractice modality, and 3) potential benefits of using telepractice to deliver training programs to parents of autistic children. Method: This study followed a qualitative research design, and Braun and Clarke's six-step procedure was used for the thematic analysis of the comments provided by parents. Data were collected through individual interviews with parents who participated in the project. The analysis focused on identifying patterns and themes in the comments in order to better understand parents' experiences with the telepractice modality. Results: The study revealed that parents were generally satisfied with the telepractice modality, as it was easy to use and enabled a better work-family balance. The modality also enabled parents to share and receive mutual support. Despite these positive results, it remains relevant to offer training in different modalities to meet parents' differing needs. Conclusion: The study shows that parents of children with autism are generally satisfied with telepractice as a training modality. The results suggest that telepractice can be an effective alternative to traditional face-to-face training.
The study highlights the importance of taking parents' needs and preferences into account when designing and implementing training programs.
Keywords: parents, children, training, telepractice
Procedia PDF Downloads 145
1782 Developing an Intelligent Table Tennis Ball Machine with Human Play Simulation for Technical Training
Authors: Chen-Chi An, Jun-Yi He, Cheng-Han Hsieh, Chen-Ching Ting
Abstract:
This research successfully developed an intelligent table tennis ball machine that simulates human play when serving the ball. It is well known that an excellent ball machine can help a table tennis coach teach more efficiently and give players good technical training and entertainment. An excellent ball machine should be able to serve all balls based on human play simulation, since competitions are played between people. In this work, two counter-rotating wheels are used to serve the balls; changing the absolute rotating speeds of the two wheels and the difference in rotating speed between them adjusts the striking force and the rotating speed of the ball. The relationships between the absolute rotating speeds of the two wheels and the striking force of the ball, and between the speed difference of the two wheels and the rotating speed of the ball, were experimentally determined for technical development. The outlet speed, ejected distance, and rotating speed of the ball were measured while changing the absolute rotating speeds of the two wheels over a series of speed differences to calibrate the ball machine; the outlet speed and ejected distance of the ball were further converted into the striking force. In operation, the balls served by the intelligent ball machine were based on the recorded calibration curves, with the help of a computer. The experiments used photosensitive devices to detect the outlet and rotating speed of the ball. Finally, this research developed teaching programs for technical training using three ball machines, which delivered more efficient training.
Keywords: table tennis, ball machine, human play simulation, counter-rotating wheels
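The two-wheel service mechanism described above can be sketched with an idealized no-slip model: launch speed is roughly the mean of the two wheel surface speeds, and spin comes from their difference. The wheel and ball radii, the sign convention, and the no-slip assumption are all simplifications for illustration, not the paper's experimentally calibrated relationships:

```python
import math

def ball_launch(wheel_rpm_top, wheel_rpm_bottom, wheel_radius_m=0.05, ball_radius_m=0.02):
    """Idealized no-slip model of a two-wheel ball launcher.

    Returns (launch speed in m/s, ball spin in rad/s). Spin sign is an
    assumed convention: positive when the top wheel surface moves faster."""
    v_top = 2.0 * math.pi * wheel_rpm_top / 60.0 * wheel_radius_m
    v_bot = 2.0 * math.pi * wheel_rpm_bottom / 60.0 * wheel_radius_m
    speed = (v_top + v_bot) / 2.0               # mean surface speed
    spin = (v_top - v_bot) / (2.0 * ball_radius_m)  # surface-speed difference
    return speed, spin

# Equal wheel speeds: fast flat ball. Unequal speeds: same mean speed, plus spin.
print(ball_launch(3000.0, 3000.0))
print(ball_launch(3200.0, 2800.0))
```

This mirrors the calibration logic in the abstract: the sum (mean) of the wheel speeds controls the struck force, while the difference controls the ball's rotation.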
Procedia PDF Downloads 434
1781 Maximization of Lifetime for Wireless Sensor Networks Based on Energy Efficient Clustering Algorithm
Authors: Frodouard Minani
Abstract:
Over the last decade, wireless sensor networks (WSNs) have been used in many areas such as health care, agriculture, defense, the military, and disaster-hit areas. A wireless sensor network consists of a base station (BS) and a number of wireless sensors that monitor temperature, pressure, and motion under different environmental conditions. The key parameter in designing a protocol for wireless sensor networks is energy efficiency, since energy is the scarcest resource of sensor nodes and determines their lifetime. Maximizing sensor node lifetime is therefore an important issue in the design of applications and protocols for wireless sensor networks, and clustering sensor nodes is an effective topology control approach for achieving this goal. In this paper, the researcher presents an energy-efficient protocol to prolong the network lifetime based on an energy-efficient clustering algorithm. The Low Energy Adaptive Clustering Hierarchy (LEACH) is a cluster-based routing protocol used to lower energy consumption and improve the lifetime of wireless sensor networks. Minimizing energy dissipation and maximizing network lifetime are important matters in the design of applications and protocols for wireless sensor networks. The proposed system maximizes the lifetime of the network by choosing the farthest cluster head (CH) instead of the closest CH and by forming clusters considering parameter metrics such as node density, residual energy, and the distance between clusters (inter-cluster distance). Comparisons between the proposed protocol and comparative protocols in different scenarios were carried out, and the simulation results showed that the proposed protocol outperforms the comparative protocols in various scenarios.
Keywords: base station, clustering algorithm, energy efficient, sensors, wireless sensor networks
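The cluster-head criteria listed above (node density, residual energy, inter-node distance) can be sketched as a weighted score over candidate nodes. The weights, the scoring form, and the node representation are illustrative assumptions, not the paper's algorithm:

```python
import math

def dist(a, b):
    """Euclidean distance between two nodes given as {'x', 'y', 'energy'} dicts."""
    return math.hypot(a["x"] - b["x"], a["y"] - b["y"])

def choose_cluster_head(nodes, radius=30.0, w_energy=0.5, w_density=0.3, w_dist=0.2):
    """Pick a cluster head by a weighted score: reward residual energy and
    local density (neighbours within `radius`), penalise mean distance to
    the other nodes. All weights are hypothetical."""
    def score(n):
        density = sum(1 for m in nodes if m is not n and dist(n, m) <= radius)
        mean_d = sum(dist(n, m) for m in nodes if m is not n) / (len(nodes) - 1)
        return w_energy * n["energy"] + w_density * density - w_dist * mean_d
    return max(nodes, key=score)

nodes = [
    {"x": 0.0, "y": 0.0, "energy": 1.0},
    {"x": 10.0, "y": 0.0, "energy": 5.0},  # central, well charged
    {"x": 20.0, "y": 0.0, "energy": 1.0},
]
print(choose_cluster_head(nodes)["energy"])
```

In a full LEACH-style protocol this selection would be re-run each round so that the CH role, and its energy cost, rotates among nodes.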
Procedia PDF Downloads 146
1780 The Lubrication Regimes Recognition of a Pressure-Fed Journal Bearing by Time and Frequency Domain Analysis of Acoustic Emission Signals
Authors: S. Hosseini, M. Ahmadi Najafabadi, M. Akhlaghi
Abstract:
The health of journal bearings is very important in preventing unforeseen breakdowns in rotary machines, and poor lubrication is one of the most important factors producing bearing failures. Hydrodynamic lubrication (HL), mixed lubrication (ML), and boundary lubrication (BL) are the three regimes of journal bearing lubrication. This paper uses the acoustic emission (AE) measurement technique to correlate features of the AE signals with the three lubrication regimes. The transitions from HL to ML as a function of operating factors such as rotating speed, load, and inlet oil pressure are detected by time-domain and time-frequency-domain signal analysis techniques, and metal-to-metal contacts between the sliding surfaces of the journal and the bearing are then identified. It is found that there is a significant difference between the theoretical and experimental operating values obtained for defining the lubrication regions.
Keywords: acoustic emission technique, pressure fed journal bearing, time and frequency signal analysis, metal-to-metal contact
Procedia PDF Downloads 155
1779 Application of a Confirmatory Composite Model for Assessing the Extent of Agricultural Digitalization: A Case of Proactive Land Acquisition Strategy (PLAS) Farmers in South Africa
Authors: Mazwane S., Makhura M. N., Ginege A.
Abstract:
Digitalization in South Africa has received considerable attention from policymakers. The South African government's support for the development of the digital economy has been demonstrated through the enactment of various national policies and strategies. This study sought to develop an index of agricultural digitalization by applying confirmatory composite analysis (CCA). Another aim was to determine the factors that affect the development of digitalization on PLAS farms. Data on the indicators of the three dimensions of digitalization were collected from 300 Proactive Land Acquisition Strategy (PLAS) farms in South Africa using semi-structured questionnaires. Confirmatory composite analysis was employed to reduce the items into three digitalization dimensions and ultimately into a digitalization index. Standardized digitalization index scores were extracted and fitted to a linear regression model to determine the factors affecting digitalization development. The results revealed that the model shows practical validity and can be used to measure digitalization development, as the measures of fit (geodesic distance, standardized root mean square residual, and squared Euclidean distance) were all below their respective 95% quantiles of the bootstrap discrepancies (HI95 values). Therefore, digitalization is an emergent variable that can be measured using CCA. The average level of digitalization on PLAS farms was 0.2 and varied significantly across provinces. The factors that significantly influence digitalization development on PLAS land reform farms were age, gender, farm type, network type, and cellular data type. These findings should enable researchers and policymakers to understand the level of digitalization and the patterns of its development, as well as correctly attribute digitalization development to the contributing factors.
Keywords: agriculture, digitalization, confirmatory composite model, land reform, proactive land acquisition strategy, South Africa
Procedia PDF Downloads 65
1778 Local Directional Encoded Derivative Binary Pattern Based Coral Image Classification Using Weighted Distance Gray Wolf Optimization Algorithm
Authors: Annalakshmi G., Sakthivel Murugan S.
Abstract:
This paper presents a local directional encoded derivative binary pattern (LDEDBP) feature extraction method that can be applied to the classification of submarine coral reef images. The classification of coral reef images using texture features is difficult due to the dissimilarities among class samples. In coral reef image classification, texture features are extracted using the proposed method, the local directional encoded derivative binary pattern (LDEDBP). The proposed approach extracts the complete structural arrangement of the local region using the local binary pattern (LBP) and also extracts edge information using the local directional pattern (LDP) from the edge responses available in a particular region, thereby achieving an extra discriminative feature value. Typically, the LDP extracts edge details in all eight directions. Integrating the edge responses with the local binary pattern yields a more robust texture descriptor than those used in other texture feature extraction methods. Finally, the proposed technique is applied to an extreme learning machine (ELM) with a meta-heuristic algorithm known as the weighted distance grey wolf optimizer (WDGWO) to optimize the input weights and biases of single-hidden-layer feed-forward neural networks (SLFNs). In the empirical results, ELM-WDGWO demonstrated better performance in terms of accuracy on all coral datasets, namely RSMAS, EILAT, EILAT2, and MLC, compared with other state-of-the-art algorithms, achieving the highest overall classification accuracy of 94%.
Keywords: feature extraction, local directional pattern, ELM classifier, GWO optimization
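The LBP building block of the descriptor can be sketched in a few lines. This shows only the plain 3x3 LBP code, not the full LDEDBP with its directional edge responses:

```python
def lbp_code(patch):
    """Basic 3x3 local binary pattern: threshold the 8 neighbours against
    the centre pixel and pack the bits clockwise from the top-left.
    `patch` is a 3x3 list of lists of pixel intensities."""
    c = patch[1][1]
    order = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    code = 0
    for bit, (i, j) in enumerate(order):
        if patch[i][j] >= c:
            code |= 1 << bit
    return code

# Bright top row, dark bottom row: only the three top-row bits are set.
print(lbp_code([[9, 9, 9], [1, 5, 1], [0, 0, 0]]))
```

In practice the 8-bit codes from every pixel are accumulated into a 256-bin histogram, which becomes the texture feature vector fed to the classifier.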
Procedia PDF Downloads 164
1777 Correlation between Cephalometric Measurements and Visual Perception of Facial Profile in Skeletal Type II Patients
Authors: Choki, Supatchai Boonpratham, Suwannee Luppanapornlarp
Abstract:
The objective of this study was to find a correlation between cephalometric measurements and visual perception of facial profile in skeletal type II patients. In this study, 250 lateral cephalograms of female patients aged 20 to 22 years were analyzed. The profile outlines of all the samples were hand traced and transformed into silhouettes by the principal investigator. Profile ratings were done by 9 orthodontists on a Visual Analogue Scale from score one to ten (increasing level of convexity). 37 hard tissue and soft tissue cephalometric measurements were analyzed by the principal investigator. All the measurements were repeated after a 2-week interval for error assessment. Finally, the rankings of visual perceptions were correlated with cephalometric measurements using the Spearman correlation coefficient (P < 0.05). The results show that an increase in facial convexity was correlated with higher values of ANB (A point, nasion, and B point), AF-BF (distance from A point to B point in mm), L1-NB (distance from lower incisor to NB line in mm), anterior maxillary alveolar height, posterior maxillary alveolar height, overjet, H angle hard tissue, H angle soft tissue, and lower lip to E plane (absolute correlation values from 0.277 to 0.711). In contrast, an increase in facial convexity was correlated with lower values of Pg to N perpendicular and Pg to NB (mm) (absolute correlation values -0.302 and -0.294, respectively). Among the soft tissue measurements, the H angles had a higher correlation with visual perception than the facial contour angle, nasolabial angle, and lower lip to E plane. In conclusion, the findings of this study indicated that the correlation of cephalometric measurements with visual perception was less than expected. Only 29% of cephalometric measurements had a significant correlation with visual perception.
Therefore, diagnosis based solely on cephalometric analysis can result in failure to meet the patient’s esthetic expectations.
Keywords: cephalometric measurements, facial profile, skeletal type II, visual perception
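The Spearman rank correlation used above can be computed as follows; the ANB and VAS values here are hypothetical illustrations, not data from the study, and the simple ranking assumes no ties (in practice, scipy's `spearmanr` handles ties and p-values).

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman rank correlation (assumes no tied values)."""
    def ranks(a):
        order = np.argsort(a)
        r = np.empty(len(a))
        r[order] = np.arange(1, len(a) + 1)
        return r
    rx, ry = ranks(np.asarray(x, float)), ranks(np.asarray(y, float))
    rx -= rx.mean()
    ry -= ry.mean()
    # Pearson correlation of the ranks
    return float((rx @ ry) / np.sqrt((rx @ rx) * (ry @ ry)))

# Hypothetical ANB angles (degrees) vs mean VAS convexity ratings
anb = [2.1, 4.5, 6.0, 3.2, 7.8, 5.1, 1.0, 6.9]
vas = [2.0, 5.0, 7.0, 3.0, 9.0, 6.0, 1.0, 8.0]
rho = spearman_rho(anb, vas)
```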
Procedia PDF Downloads 138
1776 Numerical Investigation of Wave Run-Up on Curved Dikes
Authors: Suba Periyal Subramaniam, Babette Scheres, Altomare Corrado, Holger Schuttrumpf
Abstract:
Due to climate change and the usage of coastal areas, there is an increasing risk of dike failures along coasts worldwide. Wave run-up plays a key role in the planning and design of coastal structures. Coastal dike lines are bent either due to geological characteristics or due to the influence of anthropogenic activities. The effect of the curvature of coastal dikes on wave run-up and overtopping has not yet been investigated. The scope of this research is to determine the effects of dike curvature on wave run-up by employing numerical model studies for various dike opening angles. Numerical simulation is carried out using DualSPHysics, a meshless method, and OpenFOAM, a mesh-based method. The numerical results of the wave run-up on a curved dike and the wave transformation process for various opening angles, wave attacks, and wave parameters will be compared and discussed. This research aims to contribute a more precise analysis and understanding of the influence of curvature in the dike line and thus ensure a higher level of protection in the future development of coastal structures.
Keywords: curved dikes, DualSPHysics, OpenFOAM, wave run-up
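For context, one widely used EurOtop-type empirical estimate of the 2% wave run-up on a smooth, straight dike can be sketched as below (all influence factors set to 1; the input values are illustrative). The study's point is precisely that such straight-dike formulas do not account for a curved dike line.

```python
import math

def runup_2pct(Hm0, Tm10, slope):
    """EurOtop-type estimate of 2% run-up on a smooth straight dike.

    Hm0   : spectral significant wave height [m]
    Tm10  : spectral wave period T_{m-1,0} [s]
    slope : dike slope tan(alpha)
    """
    g = 9.81
    L0 = g * Tm10**2 / (2 * math.pi)      # deep-water wavelength
    xi = slope / math.sqrt(Hm0 / L0)      # surf-similarity parameter
    # breaking branch capped by the non-breaking branch
    rel = min(1.65 * xi, 4.0 - 1.5 / math.sqrt(xi))
    return rel * Hm0

ru = runup_2pct(Hm0=2.0, Tm10=6.0, slope=1 / 3)
```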
Procedia PDF Downloads 149
1775 Self-Supervised Learning for Hate-Speech Identification
Authors: Shrabani Ghosh
Abstract:
Automatic offensive language detection in social media has become a pressing task in today's NLP. Manual offensive language detection is tedious and laborious, so automatic methods based on machine learning are the only alternatives. Previous works have performed sentiment analysis over social media in supervised, semi-supervised, and unsupervised manners. Domain adaptation in a semi-supervised way has also been explored in NLP, where the source domain and the target domain differ. In domain adaptation, the source domain usually has a large amount of labeled data, while only a limited amount of labeled data is available in the target domain. Pretrained transformers like BERT and RoBERTa are further pre-trained with masked language modeling (MLM) and fine-tuned to perform text classification. In previous work, hate speech detection was explored on Gab.ai, a free speech platform described as hosting extremism in varying degrees in online social media. In the domain adaptation process, Twitter data is used as the source domain, and Gab data is used as the target domain. The performance of domain adaptation also depends on cross-domain similarity. Different distance measures such as L2 distance, cosine distance, Maximum Mean Discrepancy (MMD), Fisher Linear Discriminant (FLD), and CORAL have been used to estimate domain similarity. Naturally, in-domain distances are small, while between-domain distances are expected to be large. Previous findings show that a pretrained masked language model (MLM) fine-tuned with a mixture of posts from the source and target domains gives higher accuracy. However, the in-domain accuracy of the hate classifier on Twitter data is 71.78%, while the out-of-domain accuracy of the hate classifier on Gab data drops to 56.53%. Recently, self-supervised learning has received a lot of attention, as it is more applicable when labeled data are scarce.
A few works have already explored applying self-supervised learning to NLP tasks such as sentiment classification. The self-supervised language representation model ALBERT focuses on modeling inter-sentence coherence and helps downstream tasks with multi-sentence inputs. A self-supervised attention learning approach shows better performance, as it exploits extracted context words in the training process. In this work, a self-supervised attention mechanism is proposed to detect hate speech on Gab.ai. This framework initially classifies the Gab dataset in an attention-based self-supervised manner. In the next step, a semi-supervised classifier is trained on the combination of labeled data from the first step and unlabeled data. The performance of the proposed framework will be compared with the results described earlier and also with the optimized outcomes obtained from different optimization techniques.
Keywords: attention learning, language model, offensive language detection, self-supervised learning
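The MMD domain-distance measure mentioned above can be sketched in numpy; this is the standard biased RBF-kernel estimate on two sets of (hypothetical) embedding vectors, not the authors' implementation.

```python
import numpy as np

def mmd_rbf(X, Y, gamma=1.0):
    """Squared maximum mean discrepancy with an RBF kernel.

    Biased estimate: MMD^2 = mean k(X,X) + mean k(Y,Y) - 2 mean k(X,Y).
    Small values suggest the two domains (e.g. Twitter vs Gab post
    embeddings) are similar.
    """
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    X = np.asarray(X, float)
    Y = np.asarray(Y, float)
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

same = mmd_rbf(np.zeros((5, 3)), np.zeros((5, 3)))      # identical domains
far = mmd_rbf(np.zeros((5, 3)), np.ones((5, 3)) * 5)    # distant domains
```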
Procedia PDF Downloads 107
1774 Mechanical Simulation with Electrical and Dimensional Tests for AISHa Containment Chamber
Authors: F. Noto, G. Costa, L. Celona, F. Chines, G. Ciavola, G. Cuttone, S. Gammino, O. Leonardi, S. Marletta, G. Torrisi
Abstract:
At the Istituto Nazionale di Fisica Nucleare – Laboratorio Nazionale del Sud (INFN-LNS), broad experience in the design, construction, and commissioning of ECR and microwave ion sources is available. The AISHa ion source has been designed by taking into account the typical requirements of hospital-based facilities, where the maximization of the mean time between failures (MTBF) is a key point together with the maintenance operations, which should be fast and easy. It is intended to be a multipurpose device, operating at 18 GHz in order to achieve higher plasma densities. It should provide enough versatility for the future needs of hadron therapy, including the ability to run at larger microwave power to produce different species and highly charged ion beams. The source is potentially interesting for any hadron therapy facility using heavy ions. In this paper, we analyze the dimensional and electrical tests of an innovative solution for the containment chamber that solves our insulation and structural problems.
Keywords: FEM analysis, electron cyclotron resonance ion source, dielectric measurement, hadron therapy
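The MTBF figure of merit mentioned above is just total operating time divided by the number of failures; combined with the mean time to repair (MTTR), it gives the steady-state availability. The numbers below are illustrative, not AISHa data.

```python
def mtbf_hours(uptime_hours, n_failures):
    """Mean time between failures: total operating time / failures."""
    return uptime_hours / n_failures

def availability(mtbf, mttr):
    """Steady-state availability from MTBF and mean time to repair."""
    return mtbf / (mtbf + mttr)

# Hypothetical: 8000 h of operation, 4 failures, 10 h mean repair time
m = mtbf_hours(8000, 4)      # 2000 h
a = availability(m, 10)      # just below 1
```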
Procedia PDF Downloads 293
1773 Does Stock Markets Asymmetric Information Affect Foreign Capital Flows?
Authors: Farid Habibi Tanha, Mojtaba Jahanbazi, Morteza Foroutan, Rasidah Mohd Rashid
Abstract:
This paper examines the effects of asymmetric information in determining capital inflows, captured through stock market microstructure. The model can explain several stylized facts regarding capital immobility. The first phase of the research involved collecting and refining 150,000,000 daily data points from 11 stock markets over a period of one decade in an effort to minimize the impact of survivorship bias. Three microstructure techniques were used to measure information asymmetries. The final phase analyzes the model through a panel data approach. As a unique contribution, this research provides valuable information regarding the negative effects of information asymmetries in stock markets on attracting foreign investment. The results of this study can be directly considered by policymakers to monitor and control changes in capital flow in order to keep market conditions healthy, preventing and managing possible shocks to avoid sudden reversals and market failures.
Keywords: asymmetric information, capital inflow, market microstructure, investment
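The abstract does not name its three microstructure measures, so purely as an illustration of the genre, one common price-impact proxy often read alongside information-asymmetry measures is Amihud's (2002) illiquidity ratio:

```python
import numpy as np

def amihud_illiquidity(returns, volumes):
    """Amihud (2002) price-impact ratio: mean(|r_t| / volume_t).

    Higher values indicate greater price impact per unit of trading
    volume, commonly used as an illiquidity / adverse-selection
    proxy. This is not necessarily one of the paper's three measures.
    """
    r = np.abs(np.asarray(returns, float))
    v = np.asarray(volumes, float)
    return float(np.mean(r / v))

# Hypothetical two-day series of returns and traded volumes
val = amihud_illiquidity([0.01, -0.02], [1e6, 2e6])
```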
Procedia PDF Downloads 322
1772 Study on Water Level Management Criteria of Reservoir Failure Alert System
Authors: B. Lee, B. H. Choi
Abstract:
The loss of safety of reservoirs brought about by climate change and facility aging leads to reservoir failures, which result in the loss of lives and property damage in downstream areas. Therefore, it is necessary to provide a reservoir failure alert system that detects the early signs of failure (with sensors) in real time and performs safety management to prevent and minimize possible damage to downstream residents. 10 case studies were carried out to verify the water level management criteria of four levels (attention, caution, alert, serious). Peak changes in water level data were analysed. The results showed that 'caution' and 'alert' were close to 33% and 66% of the difference in level between the flood water level and the full water level. Therefore, it is adequate to use these initial water level management criteria for the reservoir failure alert system in its first year. Acknowledgment: This research was supported by a grant (2017-MPSS31-002) from the 'Supporting Technology Development Program for Disaster Management' funded by the Ministry of the Interior and Safety (MOIS).
Keywords: alert system, management criteria, reservoir failure, sensor
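A minimal sketch of the four-level classification, assuming (as an illustration, not the paper's exact criteria) that 'caution' and 'alert' sit at 33% and 66% of the gap between full water level and flood water level, and that exceeding the flood level is 'serious':

```python
def alert_level(level, full, flood):
    """Map a measured reservoir water level to a four-step alert.

    `full`  : full (normal) water level
    `flood` : flood water level
    Thresholds at 33% / 66% of the gap follow the paper's finding;
    the exact boundary handling here is an assumption.
    """
    if level >= flood:
        return "serious"
    frac = (level - full) / (flood - full)
    if frac >= 0.66:
        return "alert"
    if frac >= 0.33:
        return "caution"
    return "attention"
```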
Procedia PDF Downloads 201
1771 Robust ANOVA: An Illustrative Study in Horticultural Crop Research
Authors: Dinesh Inamadar, R. Venugopalan, K. Padmini
Abstract:
An attempt has been made in the present communication to elucidate the efficacy of robust ANOVA methods for analyzing horticultural field experimental data in the presence of outliers. The results obtained fortify the use of robust ANOVA methods, as there was a substantial reduction in the error mean square, and hence in the probability of committing a Type I error, compared to the regular approach.
Keywords: outliers, robust ANOVA, horticulture, Cook's distance, Type I error
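The effect can be sketched numerically: a classical one-way ANOVA on data with a gross outlier, versus the same ANOVA after a crude trimming step (one of many possible robustifiers; the data below are hypothetical, not the paper's field data).

```python
import numpy as np

def one_way_anova(groups):
    """Classical one-way ANOVA: returns (F, error mean square)."""
    all_x = np.concatenate(groups)
    grand = all_x.mean()
    ss_between = sum(len(g) * (np.mean(g) - grand) ** 2 for g in groups)
    ss_within = sum(((g - np.mean(g)) ** 2).sum() for g in groups)
    df_b = len(groups) - 1
    df_w = len(all_x) - len(groups)
    mse = ss_within / df_w
    return (ss_between / df_b) / mse, mse

def trim(g, k=1):
    """Drop the k smallest and k largest values (crude robustifier)."""
    s = np.sort(np.asarray(g, float))
    return s[k:-k]

clean = [np.array([10.0, 11, 12, 9, 10]), np.array([14.0, 15, 13, 14, 16])]
dirty = [clean[0], np.append(clean[1], 60.0)]   # one gross outlier
F1, mse1 = one_way_anova(dirty)
F2, mse2 = one_way_anova([trim(g) for g in dirty])
# mse2 < mse1: trimming shrinks the error mean square
```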
Procedia PDF Downloads 391
1770 Analysis of Fault Tolerance on Grid Computing in Real Time Approach
Authors: Parampal Kaur, Deepak Aggarwal
Abstract:
In the computational grid, fault tolerance is an imperative issue to be considered during job scheduling. Due to the widespread use of resources, systems are highly prone to errors and failures. Hence, fault tolerance plays a key role in the grid to avoid the problem of unreliability. Scheduling tasks to appropriate resources is a vital requirement in the computational grid. The fittest-resource scheduling algorithm searches for the appropriate resource based on the job requirements, in contrast to general scheduling algorithms, where jobs are scheduled to the resources with the best performance factor. The proposed method improves the fault tolerance of the fittest-resource scheduling algorithm by scheduling the job in coordination with job replication when the resource has low reliability. Based on its reliability index, a resource is identified as critical, and tasks are scheduled according to the criticality of the resources. Results show that the execution time of the tasks is comparatively reduced with the proposed algorithm using a real-time approach rather than a simulator.
Keywords: computational grid, fault tolerance, task replication, job scheduling
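A minimal sketch of the idea (not the authors' algorithm): among resources that fit the job's requirement, schedule on the most reliable one, and replicate the task when even that resource's reliability index falls below a threshold. The threshold, replica count, and resource fields are illustrative assumptions.

```python
def schedule(task, resources, reliability_threshold=0.7, replicas=2):
    """Fittest-resource scheduling with replication on low reliability.

    `resources`: list of dicts with 'name', 'capacity', 'reliability'.
    Returns the names of the resources the task is placed on.
    """
    fit = [r for r in resources if r["capacity"] >= task["demand"]]
    if not fit:
        return []                        # no adequate resource
    best = max(fit, key=lambda r: r["reliability"])
    if best["reliability"] >= reliability_threshold:
        return [best["name"]]
    # critical resource: replicate onto the most reliable fitting ones
    ranked = sorted(fit, key=lambda r: r["reliability"], reverse=True)
    return [r["name"] for r in ranked[:replicas]]

resources = [
    {"name": "A", "capacity": 4, "reliability": 0.9},
    {"name": "B", "capacity": 8, "reliability": 0.5},
    {"name": "C", "capacity": 8, "reliability": 0.6},
]
plan = schedule({"demand": 8}, resources)   # low-reliability fits -> replicate
```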
Procedia PDF Downloads 436
1769 A Design Method for Wind Turbine Blade to Have Uniform Strength and Optimum Power Generation Performance
Authors: Pengfei Liu, Yiyi Xu
Abstract:
There have been substantial incidents of wind turbine blade fractures and failures due to the lack of a systematic blade strength design method that incorporates the aerodynamic forces and power generation efficiency. This research develops a methodology and procedure for wind turbine rotor blade strength design that takes into account strength, structural integrity, and aerodynamic performance in terms of power generation efficiency. A wind turbine blade designed using this method and procedure will have uniform strength across the span, saving unnecessary thickness at many radial locations while maintaining the optimum power generation performance. A turbine rotor code, taking into account both aerodynamic and structural properties, was developed. An existing wind turbine blade was used as an example. For an extreme wind speed of 100 km per hour, the design reduced material usage by about 19% while maintaining the optimum power generation efficiency.
Keywords: renewable energy, wind turbine, turbine blade strength, aerodynamics-strength coupled optimization
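The uniform-strength idea can be sketched with elementary beam bending (not the authors' rotor code): setting the bending stress sigma = M / W equal to the allowable stress at every spanwise station gives the required section modulus W(r) = M(r) / sigma_allow, so material tracks the load instead of being uniform. The moment distribution and allowable stress below are illustrative.

```python
def required_section_modulus(bending_moments, sigma_allow):
    """Spanwise section modulus for uniform bending strength.

    bending_moments : flapwise bending moment at each station [N*m]
    sigma_allow     : allowable material stress [Pa]
    """
    return [m / sigma_allow for m in bending_moments]

# hypothetical moment decreasing toward the blade tip, composite allowable
W = required_section_modulus([5.0e5, 3.0e5, 1.2e5, 2.0e4], 80.0e6)
```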
Procedia PDF Downloads 179
1768 Probabilistic Graphical Model for the Web
Authors: M. Nekri, A. Khelladi
Abstract:
The World Wide Web is a network with a complex topology, the main properties of which are a power-law degree distribution, a low clustering coefficient, and a small average distance. Modeling the web as a graph allows locating information quickly and consequently helps in the construction of search engines. Here, we present a model based on already existing probabilistic graphs with all the aforesaid characteristics. This work consists in studying the web in order to understand its structure; this will enable us to model it more easily and to propose a possible algorithm for its exploration.
Keywords: clustering coefficient, preferential attachment, small world, web community
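The preferential-attachment mechanism listed in the keywords, which produces the power-law degree distribution mentioned above, can be sketched as a Barabási–Albert-style growth process (an illustrative sketch, not the authors' model):

```python
import random

def preferential_attachment(n, m=2, seed=42):
    """Grow a graph where each new node attaches to m existing nodes
    with probability proportional to their degree.

    Returns an adjacency list (node -> set of neighbours). Sampling
    from a degree-weighted pool implements 'rich get richer'.
    """
    random.seed(seed)
    adj = {i: set() for i in range(m + 1)}
    # start from a small complete seed graph
    for i in adj:
        adj[i] = {j for j in adj if j != i}
    targets = [i for i in adj for _ in adj[i]]  # degree-weighted pool
    for new in range(m + 1, n):
        chosen = set()
        while len(chosen) < m:                  # m distinct targets
            chosen.add(random.choice(targets))
        adj[new] = chosen
        for t in chosen:
            adj[t].add(new)
            targets.append(t)
        targets.extend([new] * m)
    return adj

g = preferential_attachment(50)
```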
Procedia PDF Downloads 272
1767 Infrared Detection Device for Accurate Scanning 3D Objects
Authors: Evgeny A. Rybakov, Dmitry P. Starikov
Abstract:
This article contains information about creating a special unit for scanning 3D objects of different natures and materials, for example, plastic, plaster, cardboard, wood, metal, etc. The main part of the unit is an infrared transducer, which sends a wave to the object and receives the reflected wave for calculating the distance. After that, the microcontroller sends the data to a PC, and a computer program creates a model for printing in plastic, gypsum, brass, etc.
Keywords: clutch, infrared, microcontroller, plastic, shaft, stage
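If the transducer measures distance by round-trip time of flight, the conversion is d = v * t / 2, since the wave travels to the object and back. This is a generic sketch with illustrative values; the article's sensor may use a different ranging principle, and real sensors typically do this conversion internally.

```python
def distance_from_echo(round_trip_s, wave_speed=3.0e8):
    """Distance from a reflected-wave round-trip time: d = v * t / 2.

    For an optical/infrared pulse the wave speed is the speed of
    light; divide by 2 because the path is out and back.
    """
    return wave_speed * round_trip_s / 2.0

d = distance_from_echo(2.0e-9)   # a 2 ns round trip -> 0.3 m
```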
Procedia PDF Downloads 444
1766 Digital Twin Platform for BDS-3 Satellite Navigation Using Digital Twin Intelligent Visualization Technology
Authors: Rundong Li, Peng Wu, Junfeng Zhang, Zhipeng Ren, Chen Yang, Jiahui Gan, Lu Feng, Haibo Tong, Xuemei Xiao, Yuying Chen
Abstract:
Research on BeiDou-3 satellite navigation is on the rise, but in practice it is inevitable that satellite data are insecure, research and development is inefficient, and there is no ability to deal with failures in advance. Digital twin technology has obvious advantages in simulating the life cycle models of aerospace satellite navigation products. In order to meet the increasing demand, this paper builds a BeiDou-3 satellite navigation digital twin platform (BDSDTP). The basic establishment of the BDSDTP was completed by establishing a digital twin counterpart, a comprehensive BeiDou-3 digital twin design, a predictive maintenance (PdM) mathematical model, and a visual interaction design. Finally, this paper provides an application case of the platform, which provides a reference for the application of the BDSDTP in various fields of navigation and offers clear help in extending the full life cycle of BeiDou-3 satellite navigation.
Keywords: BDS-3, digital twin, visualization, PdM
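The "deal with failures in advance" idea behind a PdM model can be illustrated with a deliberately crude sketch (the BDSDTP's actual model is surely richer): fit a linear trend to a degradation signal and extrapolate to the failure threshold to estimate remaining useful life.

```python
import numpy as np

def remaining_useful_life(times, readings, failure_threshold):
    """Crude PdM estimate: linear trend extrapolated to a threshold.

    times / readings : degradation signal samples (illustrative)
    Returns the estimated time remaining after the last sample, or
    infinity if no increasing degradation trend is detected.
    """
    t = np.asarray(times, float)
    y = np.asarray(readings, float)
    slope, intercept = np.polyfit(t, y, 1)
    if slope <= 0:
        return float("inf")
    t_fail = (failure_threshold - intercept) / slope
    return max(0.0, t_fail - t[-1])

rul = remaining_useful_life([0, 1, 2, 3], [0.0, 1.0, 2.0, 3.0], 10.0)
```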
Procedia PDF Downloads 144
1765 GC-MS Data Integrated Chemometrics for the Authentication of Vegetable Oil Brands in Minna, Niger State, Nigeria
Authors: Rasaq Bolakale Salau, Maimuna Muhammad Abubakar, Jonathan Yisa, Muhammad Tauheed Bisiriyu, Jimoh Oladejo Tijani, Alexander Ifeanyi Ajai
Abstract:
Vegetable oils are widely consumed in Nigeria. This has led to the competitive manufacture of various oil brands and to increasing tendencies toward fraud, labelling misinformation, and other unwholesome practices. A total of thirty samples, including raw and corresponding branded samples of vegetable oils, were collected. The oils were extracted from raw groundnut, soya bean, and oil palm fruits. The GC-MS data were subjected to the chemometric techniques of PCA and HCA, using version 8.7 of the standalone SOLO chemometrics software developed by Eigenvector Research Incorporated and powered by PLS Toolbox. The GC-MS fingerprint gave a basis for discrimination, as it reveals four predominant but unevenly distributed fatty acids: hexadecanoic acid methyl ester (10.27-45.21% PA), 9,12-octadecadienoic acid methyl ester (10.9-45.94% PA), 9-octadecenoic acid methyl ester (18.75-45.65% PA), and eicosanoic acid methyl ester (1.19-6.29% PA). In the PCA model, two PCs were retained, with a cumulative captured variance of 73.15%. The score plots indicated that the palm oil brands are most aligned with raw palm oil. The PCA loading plot reveals the signature retention times between 4.0 and 6.0 needed for quality assurance and authentication of the oil samples; these correspond to aromatic hydrocarbon, alcohol, and aldehyde functional groups. The HCA dendrogram, modeled using the Euclidean distance through Ward's method, indicated co-equivalent samples. HCA revealed the pair of the raw palm oil and the palm oil brand in the closest neighbourhood (±1.62% difference) based on variance-weighted distance, showing the palm olein brand to be the most authentic. In conclusion, based on the GC-MS data with chemometrics, the authenticity of the branded samples is ranked as palm oil > soya oil > groundnut oil.
Keywords: vegetable oil, authenticity, chemometrics, PCA, HCA, GC-MS
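The PCA step used above can be sketched in plain numpy (a generic SVD-based PCA on column-standardized data, not the SOLO/PLS Toolbox implementation; the input matrix here is random stand-in data for samples x peak-area features):

```python
import numpy as np

def pca_scores(X, n_components=2):
    """PCA via SVD on column-standardized data.

    X : (n_samples, n_features), e.g. GC-MS peak areas per sample.
    Returns (scores, explained-variance fractions) for the first
    `n_components` components.
    """
    X = np.asarray(X, float)
    Z = (X - X.mean(axis=0)) / X.std(axis=0)
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    var = s**2 / (s**2).sum()
    return (U * s)[:, :n_components], var[:n_components]

rng = np.random.default_rng(1)
X = rng.random((10, 4))            # stand-in for 10 oil samples, 4 peaks
scores, var = pca_scores(X)
```

Score plots of `scores` and a dendrogram from Ward-linkage clustering of the same standardized data then give the groupings described above.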
Procedia PDF Downloads 35
1764 Using Google Distance Matrix Application Programming Interface to Reveal and Handle Urban Road Congestion Hot Spots: A Case Study from Budapest
Authors: Peter Baji
Abstract:
In recent years, a growing body of literature has emphasized the increasingly negative impacts of urban road congestion on the everyday life of citizens. Although there are different responses from the public sector to decrease traffic congestion in urban regions, the most effective public intervention is congestion charging. Because travel is an economic asset, its consumption can be controlled effectively by extra taxes or prices, but this demand-side intervention is often unpopular. Measuring traffic flows with different methods has a long history in the transport sciences, but until recently there was not sufficient data for evaluating road traffic flow patterns at the scale of the entire road system of a larger urban area. European cities in which congestion charges have already been introduced (e.g., London, Stockholm, Milan) designated a particular downtown zone for charging, but this protects only the users and inhabitants of the CBD (Central Business District). Through the use of Google Maps data as a resource for revealing urban road traffic flow patterns, this paper aims to provide a solution for a fairer and smarter congestion pricing method in cities. The case study area of the research comprises three bordering districts of Budapest which are linked by one main road. The first district (5th) is the original downtown, which is affected by the city's congestion charge plans. The second district (13th) lies in the transition zone and has recently been transformed into a new CBD containing the biggest office zone in Budapest. The third district (4th) is a mainly residential area on the outskirts of the city. The raw data of the research were collected with the help of Google's Distance Matrix API (Application Programming Interface), which provides estimated future traffic data as travel times between freely fixed coordinate pairs.
From the difference between free-flow and congested travel times, the daily congestion patterns and hot spots are detectable on all measured roads within the area. The results suggest that the distribution of congestion peak times and hot spots is uneven in the examined area; however, there are frequently congested areas outside the downtown, whose inhabitants also need some protection. The conclusion of this case study is that cities can develop a real-time and place-based congestion charge system that encourages car users to avoid frequently congested roads by changing their routes or travel modes. This would be a fairer solution for decreasing the negative environmental effects of urban road transportation than protecting only a very limited downtown area.
Keywords: Budapest, congestion charge, distance matrix API, application programming interface, pilot study
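The free-flow vs congested comparison can be sketched against the public Distance Matrix API: with `departure_time` set, each response element carries both `duration` (typical) and `duration_in_traffic` fields, and their ratio flags a congested link. The endpoint and field names follow Google's published API; the key and coordinates are placeholders, and this is an illustrative sketch rather than the paper's collection pipeline.

```python
import json
import urllib.parse
import urllib.request

API = "https://maps.googleapis.com/maps/api/distancematrix/json"

def fetch_element(origin, destination, api_key, departure_time="now"):
    """Query one origin-destination pair (network call; needs a valid key)."""
    qs = urllib.parse.urlencode({
        "origins": origin,            # e.g. "47.498,19.040" (placeholder)
        "destinations": destination,
        "departure_time": departure_time,
        "key": api_key,
    })
    with urllib.request.urlopen(f"{API}?{qs}", timeout=10) as r:
        return json.load(r)["rows"][0]["elements"][0]

def congestion_ratio(elem):
    """In-traffic travel time over free-flow travel time; values well
    above 1 flag a congested road at that departure time."""
    return elem["duration_in_traffic"]["value"] / elem["duration"]["value"]
```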
Procedia PDF Downloads 200