Search results for: predictive accuracy

2909 ANAC-id - Facial Recognition to Detect Fraud

Authors: Giovanna Borges Bottino, Luis Felipe Freitas do Nascimento Alves Teixeira

Abstract:

This article presents a case study of ANAC-id, an artificial intelligence solution developed by the National Civil Aviation Agency (ANAC) in Brazil. ANAC-id is an image-analysis algorithm that recognizes standard images of an unobstructed, upright face without sunglasses, making it possible to identify potential inconsistencies. It combines the YOLO architecture with three Python libraries - face recognition, face comparison, and DeepFace - providing robust analysis with a high level of accuracy.
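
As an illustration only, the sketch below shows how such a pipeline could be assembled from the named building blocks - a YOLO detector plus the face_recognition and DeepFace Python libraries. It is not the ANAC-id code; the package choices (ultralytics for YOLO), the pretrained weights, and the file names are assumptions.

```python
# A minimal sketch of a similar verification pipeline, not the ANAC-id implementation.
# Assumes the `ultralytics`, `face_recognition` and `deepface` packages are installed and
# that document_photo.jpg / submitted_photo.jpg are placeholder file names.
from ultralytics import YOLO
import face_recognition
from deepface import DeepFace

# 1) Detect faces with a YOLO-style detector (a face-trained model is assumed in practice;
#    the generic pretrained weights below are only a placeholder).
detector = YOLO("yolov8n.pt")
detections = detector("submitted_photo.jpg")[0]
n_objects = len(detections.boxes)

# 2) Encode and compare the two faces with the face_recognition library.
known = face_recognition.face_encodings(face_recognition.load_image_file("document_photo.jpg"))
probe = face_recognition.face_encodings(face_recognition.load_image_file("submitted_photo.jpg"))
same_person = bool(face_recognition.compare_faces([known[0]], probe[0])[0]) if known and probe else False

# 3) Cross-check with DeepFace, which runs its own detection and embedding model.
deepface_result = DeepFace.verify(img1_path="document_photo.jpg", img2_path="submitted_photo.jpg")

# Flag a potential inconsistency if no face is found or the two libraries disagree.
inconsistent = n_objects == 0 or not same_person or not deepface_result["verified"]
print("potential inconsistency:", inconsistent)
```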

Keywords: artificial intelligence, deepface, face compare, face recognition, YOLO, computer vision

Procedia PDF Downloads 156
2908 Brainwave Classification for Brain Balancing Index (BBI) via 3D EEG Model Using k-NN Technique

Authors: N. Fuad, M. N. Taib, R. Jailani, M. E. Marwan

Abstract:

In this paper, a comparison of k-Nearest Neighbor (k-NN) algorithms for classifying the 3D EEG model in brain balancing is presented. EEG signals were recorded from 51 healthy subjects. Development of the 3D EEG model involves pre-processing of the raw EEG signals and construction of spectrogram images, from which the maximum power spectral density (PSD) values were extracted as features. There are three indexes for the balanced brain: index 3, index 4, and index 5, and the EEG signals differ significantly across these brain balancing index (BBI) levels. The alpha (8–13 Hz) and beta (13–30 Hz) bands were used as input signals for the classification model. The k-NN classifier achieved an accuracy of 88.46%, showing that k-NN can be used to predict the brain balancing index.
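
For illustration, a minimal sketch of the classification step with scikit-learn is given below, assuming the maximum PSD features and BBI labels have already been extracted; the file names and the value of k are assumptions.

```python
# A minimal sketch of k-NN classification on the extracted PSD features.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

X = np.load("max_psd_features.npy")   # hypothetical file of alpha/beta maximum PSD features
y = np.load("bbi_labels.npy")         # hypothetical file of brain balancing indexes (3, 4, 5)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

knn = KNeighborsClassifier(n_neighbors=5)   # k is a free parameter, not stated in the abstract
knn.fit(X_train, y_train)

print("accuracy:", accuracy_score(y_test, knn.predict(X_test)))
```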

Keywords: power spectral density, 3D EEG model, brain balancing, kNN

Procedia PDF Downloads 487
2907 A Two-Step Framework for Unsupervised Speaker Segmentation Using BIC and Artificial Neural Network

Authors: Ahmad Alwosheel, Ahmed Alqaraawi

Abstract:

This work proposes a new speaker segmentation approach for two speakers. It is an online approach that does not require prior information about speaker models. It has two phases: in the first phase, a conventional unsupervised BIC-based approach is used to detect speaker changes and train a neural network, while in the second phase, the trained parameters of the neural network are used to classify the next incoming audio stream. Using this approach, accuracy comparable to similar BIC-based approaches is achieved with a significant improvement in computation time.
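
As a reference point for the first phase, the following sketch shows the standard Delta-BIC speaker-change test in its common (Chen-Gopalakrishnan) form; it is a generic illustration, not the authors' implementation, and the window handling and penalty weight are assumptions.

```python
# A minimal sketch of the Delta-BIC change test used in unsupervised BIC-based segmentation,
# assuming X is a window of acoustic feature vectors (e.g. MFCCs) of shape (N, d) and
# `split` is a candidate change point inside the window.
import numpy as np

def delta_bic(X, split, lam=1.0):
    """Positive values suggest a speaker change at `split`."""
    N, d = X.shape
    X1, X2 = X[:split], X[split:]

    def logdet(Z):
        # log-determinant of the sample covariance of a segment
        return np.linalg.slogdet(np.cov(Z, rowvar=False))[1]

    data_term = 0.5 * (N * logdet(X) - len(X1) * logdet(X1) - len(X2) * logdet(X2))
    penalty = 0.5 * (d + 0.5 * d * (d + 1)) * np.log(N)
    return data_term - lam * penalty

# Example usage: scan candidate split points and keep the best positive score.
# scores = [delta_bic(X, s) for s in range(10, len(X) - 10)]
```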

Keywords: artificial neural network, diarization, speaker indexing, speaker segmentation

Procedia PDF Downloads 502
2906 Closing the Gap: Efficient Voxelization with Equidistant Scanlines and Gap Detection

Authors: S. Delgado, C. Cerrada, R. S. Gómez

Abstract:

This research introduces an approach to voxelizing the surfaces of triangular meshes with efficiency and accuracy. Our method leverages parallel equidistant scan-lines and introduces a Gap Detection technique to address the limitations of existing approaches. We present a comprehensive study showcasing the method's effectiveness, scalability, and versatility in different scenarios. Voxelization is a fundamental process in computer graphics and simulations, playing a pivotal role in applications ranging from scientific visualization to virtual reality. Our algorithm focuses on enhancing the voxelization process, especially for complex models and high resolutions. One of the major challenges in voxelization on the Graphics Processing Unit (GPU) is the high cost of discovering the same voxels multiple times. These repeated voxels incur costly memory operations with no useful information. Our scan-line-based method ensures that each voxel is detected exactly once when processing the triangle, enhancing performance without compromising the quality of the voxelization. The heart of our approach lies in the use of parallel, equidistant scan-lines to traverse the interiors of triangles. This minimizes redundant memory operations and avoids revisiting the same voxels, resulting in a significant performance boost. Moreover, our method's computational efficiency is complemented by its simplicity and portability. Written as a single compute shader in the OpenGL Shading Language (GLSL), it is highly adaptable to various rendering pipelines and hardware configurations. To validate our method, we conducted extensive experiments on a diverse set of models from the Stanford repository. Our results demonstrate not only the algorithm's efficiency but also its ability to produce accurate, 26-tunnel-free voxelizations. The Gap Detection technique successfully identifies and addresses gaps, ensuring consistent and visually pleasing voxelized surfaces. Furthermore, we introduce the Slope Consistency Value metric, quantifying the alignment of each triangle with its primary axis. This metric provides insights into the impact of triangle orientation on scan-line-based voxelization methods. It also aids in understanding how the Gap Detection technique effectively improves results by targeting specific areas where simple scan-line-based methods might fail. Our research contributes to the field of voxelization by offering a robust and efficient approach that overcomes the limitations of existing methods. The Gap Detection technique fills a critical gap in the voxelization process. By addressing these gaps, our algorithm enhances the visual quality and accuracy of voxelized models, making it valuable for a wide range of applications. In conclusion, "Closing the Gap: Efficient Voxelization with Equidistant Scanlines and Gap Detection" presents an effective solution to the challenges of voxelization. Our research combines computational efficiency, accuracy, and innovative techniques to elevate the quality of voxelized surfaces. With its adaptable nature and valuable innovations, this technique could have a positive influence on computer graphics and visualization.
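
A much-simplified CPU sketch of the scan-line idea is shown below; it voxelizes a single triangle by sweeping equidistant lines between two of its edges. The GPU/GLSL compute-shader implementation, the single-visit guarantee, and the Gap Detection step described in the abstract are not reproduced here.

```python
# A simplified illustration of surface voxelization with equidistant scan-lines,
# assuming vertices are already expressed in voxel-grid units.
import numpy as np

def voxelize_triangle(v0, v1, v2, step=0.5):
    """Return the set of integer voxel coordinates touched by parallel scan-lines."""
    v0, v1, v2 = map(np.asarray, (v0, v1, v2))
    voxels = set()
    # Sweep equidistant lines from edge (v0-v1) toward vertex v2.
    span = max(np.linalg.norm(v2 - v0), np.linalg.norm(v2 - v1))
    n_lines = max(int(np.ceil(span / step)), 1) + 1
    for t in np.linspace(0.0, 1.0, n_lines):
        a = v0 + t * (v2 - v0)          # point on edge v0-v2
        b = v1 + t * (v2 - v1)          # point on edge v1-v2
        n_samples = max(int(np.ceil(np.linalg.norm(b - a) / step)), 1) + 1
        for s in np.linspace(0.0, 1.0, n_samples):
            p = a + s * (b - a)
            voxels.add(tuple(np.floor(p).astype(int)))
    return voxels

print(len(voxelize_triangle((0, 0, 0), (8, 0, 0), (0, 8, 5))))
```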

Keywords: voxelization, GPU acceleration, computer graphics, compute shaders

Procedia PDF Downloads 72
2905 Environmental Performance Improvement of Additive Manufacturing Processes with Part Quality Point of View

Authors: Mazyar Yosofi, Olivier Kerbrat, Pascal Mognol

Abstract:

Life cycle assessment of additive manufacturing processes has evolved significantly in recent years. Most existing studies focused mainly on energy consumption. Nowadays, new life cycle inventory acquisition methodologies have appeared in the literature and help manufacturers take into account all input and output flows during the manufacturing step of a product's life cycle. Indeed, the environmental phenomena that occur during the manufacturing step of additive manufacturing processes are becoming well understood, and it is now possible to count and measure accurately all the inventory data during this step. Optimization of the environmental performance of these processes can therefore be considered. Environmental performance can be improved by varying process parameters. However, many of these parameters (such as manufacturing speed, the power of the energy source, or the quantity of support material) directly affect the mechanical properties, surface finish, and dimensional accuracy of a functional part. This study aims to improve the environmental performance of an additive manufacturing process without deteriorating part quality. For that purpose, the authors have developed a generic method that has been applied to multiple parts made by additive manufacturing processes. First, a complete analysis of the process parameters is made in order to identify which parameters affect only the environmental performance of the process. Then, multiple parts are manufactured by varying the identified parameters. The aim of this second step is to find the optimum values of the parameters that significantly decrease the environmental impact of the process while keeping the part quality as desired. Finally, parts made with the initial parameters are compared with parts made with the modified parameters. The major finding claimed by the authors is a reduction of the environmental impact of an additive manufacturing process while respecting the three part-quality criteria: mechanical properties, dimensional accuracy, and surface roughness. Now that additive manufacturing processes can be considered mature from a technical point of view, their environmental improvement can be addressed while respecting part properties. The first part of this study presents the methodology applied to multiple academic parts; the validity of the methodology is then demonstrated on functional parts.

Keywords: additive manufacturing, environmental impact, environmental improvement, mechanical properties

Procedia PDF Downloads 288
2904 An Electrocardiography Deep Learning Model to Detect Atrial Fibrillation on Clinical Application

Authors: Jui-Chien Hsieh

Abstract:

Background: 12-lead electrocardiography (ECG) is one of the most frequently used tools in clinical practice to detect atrial fibrillation (AF), which can degenerate into life-threatening stroke. In this study, AF detection by the clinically used 12-lead ECG device had a positive predictive value (PPV) of only 0.73–0.77. Objective: There is great demand for a new algorithm to improve the precision of AF detection using 12-lead ECG. Building on recent progress in artificial intelligence (AI), we developed an ECG deep learning model that can recognize AF patterns and reduce false-positive errors. Methods: (1) 570 12-lead ECG reports whose computer interpretation by the ECG device was AF were collected as the training dataset. The reports were interpreted by two senior cardiologists, who confirmed that the precision of AF detection by the ECG device was 0.73. (2) 88 12-lead ECG reports whose computer interpretation by the ECG device was AF were used as the test dataset. Cardiologists confirmed that 68 of the 88 reports were AF and the others were not; the precision of AF detection by the ECG device was about 0.77. (3) A parallel four-layer one-dimensional convolutional neural network (CNN) was developed to identify AF based on the limb-lead and chest-lead ECGs. Results: In the 88 test samples, this model performed better on AF detection than the traditional computer interpretation of the ECG device, with 0.94 PPV, 0.98 sensitivity, and 0.80 specificity. Conclusions: Compared to the clinical ECG device, this AI ECG model improves the precision of AF detection from 0.77 to 0.94 and can have an impact on clinical applications.
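
For illustration, a minimal PyTorch sketch of a parallel 1-D CNN with a limb-lead branch and a chest-lead branch is given below; the layer widths, kernel sizes, and input length are assumptions, not the authors' exact architecture.

```python
# A minimal sketch of a parallel 1-D CNN for AF detection; channel sizes and kernels
# are illustrative assumptions.
import torch
import torch.nn as nn

def branch(in_leads):
    return nn.Sequential(
        nn.Conv1d(in_leads, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(2),
        nn.Conv1d(16, 32, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(2),
        nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
        nn.Conv1d(64, 64, kernel_size=5, padding=2), nn.ReLU(),
        nn.AdaptiveAvgPool1d(1),
    )

class ParallelAFNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.limb = branch(6)          # leads I, II, III, aVR, aVL, aVF
        self.chest = branch(6)         # leads V1-V6
        self.head = nn.Linear(128, 2)  # AF vs. non-AF

    def forward(self, limb_ecg, chest_ecg):
        z = torch.cat([self.limb(limb_ecg).flatten(1), self.chest(chest_ecg).flatten(1)], dim=1)
        return self.head(z)

model = ParallelAFNet()
logits = model(torch.randn(4, 6, 5000), torch.randn(4, 6, 5000))  # batch of 4, 10 s at 500 Hz
print(logits.shape)  # torch.Size([4, 2])
```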

Keywords: 12-lead ECG, atrial fibrillation, deep learning, convolutional neural network

Procedia PDF Downloads 114
2903 Predictive Value of Primary Tumor Depth for Cervical Lymphadenopathy in Squamous Cell Carcinoma of Buccal Mucosa

Authors: Zohra Salim

Abstract:

Objective: To assess the relationship between primary tumor thickness and cervical lymphadenopathy in squamous cell carcinoma of the buccal mucosa. Methodology: A cross-sectional observational study was carried out on 80 patients with biopsy-proven oral squamous cell carcinoma of the buccal mucosa at Dow University of Health Sciences. All study participants were treated with wide local excision of the primary tumor with elective neck dissection. Patients with prior head and neck malignancy or prior radiotherapy or chemotherapy were excluded from the study. Data were entered and analyzed in SPSS 21. A chi-squared test with a 95% confidence interval and 80% power was used to evaluate the relationship of tumor depth with cervical lymph node status. Results: 50 participants were male and 30 were female. 30 patients were in the age range of 20-40 years, 36 in the range of 40-60 years, and 14 were over 60 years of age. Tumor size ranged from 0.3 cm to 5 cm with a mean of 2.03 cm. Tumor depth ranged from 0.2 cm to 5 cm; 20% of the participants presented with tumor depth greater than 2.5 cm, while 80% presented with tumor depth less than 2.5 cm. Of the 80 patients, 27 had negative lymph nodes, while 53 had positive lymph nodes. Conclusion: Our study concludes that a relationship exists between the depth of the primary tumor and cervical lymphadenopathy in squamous cell carcinoma of the buccal mucosa.
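
A minimal sketch of the reported chi-squared analysis with SciPy follows; the 2x2 cell counts are illustrative values consistent with the reported marginal totals but otherwise invented, not the study's actual cross-tabulation.

```python
# A minimal sketch of a chi-squared test of tumor depth against nodal status.
import numpy as np
from scipy.stats import chi2_contingency

table = np.array([[24, 40],    # depth <= 2.5 cm: [node-negative, node-positive]  (placeholder counts)
                  [ 3, 13]])   # depth  > 2.5 cm: [node-negative, node-positive]  (placeholder counts)

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}, dof = {dof}")
```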

Keywords: squamous cell carcinoma, tumor depth, cervical lymphadenopathy, buccal mucosa

Procedia PDF Downloads 237
2902 SEAWIZARD: Multiplex AI-Enabled Graphene-Based Lab-On-Chip Sensing Platform for Heavy Metal Ion Monitoring in Marine Water

Authors: M. Moreno, M. Alique, D. Otero, C. Delgado, P. Lacharmoise, L. Gracia, L. Pires, A. Moya

Abstract:

Marine environments are increasingly threatened by heavy metal contamination, including mercury (Hg), lead (Pb), and cadmium (Cd), posing significant risks to ecosystems and human health. Traditional monitoring techniques often fail to provide the spatial and temporal resolution needed for real-time detection of these contaminants, especially in remote or harsh environments. SEAWIZARD addresses these challenges by leveraging the flexibility, adaptability, and cost-effectiveness of printed electronics, with the integration of microfluidics, to develop a compact, portable, and reusable sensor platform designed specifically for real-time monitoring of heavy metal ions in seawater. The SEAWIZARD sensor is a multiparametric Lab-on-Chip (LoC) device, a miniaturized system that integrates several laboratory functions into a single chip, drastically reducing sample volumes and improving adaptability. The platform integrates three printed graphene electrodes for the simultaneous detection of Hg, Cd and Pb via square wave voltammetry. These electrodes share the reference and the counter electrodes to improve space efficiency. Additionally, it integrates printed pH and temperature sensors to correct environmental interferences that may impact the accuracy of metal detection. The pH sensor is based on a carbon electrode with electrodeposited iridium oxide, while the temperature sensor is graphene-based. A protective dielectric layer is printed on top of the sensor to safeguard it in harsh marine conditions. The use of flexible polyethylene terephthalate (PET) as the substrate enables the sensor to conform to various surfaces and operate in challenging environments. One of the key innovations of SEAWIZARD is its integrated microfluidic layer, fabricated from cyclic olefin copolymer (COC). This microfluidic component enables a controlled flow of seawater over the sensing area, allowing for significantly improved detection limits compared to direct water sampling. The system's dual-channel design separates the detection of heavy metals from the measurement of pH and temperature, ensuring that each parameter is measured under optimal conditions. In addition, the temperature sensor is finely tuned with a serpentine-shaped microfluidic channel to ensure precise thermal measurements. SEAWIZARD also incorporates custom electronics that allow for wireless data transmission via Bluetooth, facilitating rapid data collection and user interface integration. Embedded artificial intelligence further enhances the platform by providing an automated alarm system capable of detecting predefined metal concentration thresholds and issuing warnings when limits are exceeded. This predictive feature enables early warnings of potential environmental disasters, such as industrial spills or toxic levels of heavy metal pollutants, making SEAWIZARD not just a detection tool but a comprehensive monitoring and early intervention system. In conclusion, SEAWIZARD represents a significant advancement in printed electronics applied to environmental sensing. By combining flexible, low-cost materials with advanced microfluidics, custom electronics, and AI-driven intelligence, SEAWIZARD offers a highly adaptable and scalable solution for real-time, high-resolution monitoring of heavy metals in marine environments. Its compact and portable design makes it an accessible, user-friendly tool with the potential to transform water quality monitoring practices and provide critical data to protect marine ecosystems from contamination-related risks.

Keywords: lab-on-chip, printed electronics, real-time monitoring, microfluidics, heavy metal contamination

Procedia PDF Downloads 30
2901 Firm Performance and Stock Price in Nigeria

Authors: Tijjani Bashir Musa

Abstract:

The recent global crisis, which resulted in the sudden crash of the Nigerian stock market, revealed some peculiarities of Nigerian firms. Some firms in Nigeria are performing well, yet their stock prices are not increasing, while other firms are on the brink of collapse, yet their stock prices are increasing. Thus, this study examines the relationship between firm performance and stock price in Nigeria. The study covered the period 2005 to 2009, which spans both the stock boom and the stock market crash that followed the global financial meltdown. The study is a panel study. A total of 140 firms were sampled from the 216 firms listed on the Nigerian Stock Exchange (NSE). Data were collected from secondary sources and divided into four strata: the best-performing stocks, the least-performing stocks, the best-performing firms, and the least-performing firms, with 35 firms in each stratum. Multiple linear regression models were used to analyse the data, and the statistical/econometric package Stata 11.0 was used to run the analysis. The study found that a relationship exists between the selected firm performance parameters (operating efficiency, firm profit, earnings per share, and working capital) and stock price. As such, firm performance provided sufficient information, and had predictive power, on stock price movements in Nigeria for all the years under study. The study recommends, among other things, that managers of firms in Nigeria formulate policies and exert effort geared towards improving firm performance, which will enhance stock price movements.
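
For illustration, the regression could be specified as below with statsmodels, assuming the panel data were exported to a tidy file; the file and column names are invented placeholders, and the original analysis was run in Stata 11.0.

```python
# A minimal sketch of the multiple linear regression of stock price on firm performance,
# one row per firm-year.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("nse_panel_2005_2009.csv")   # hypothetical file name

model = smf.ols(
    "stock_price ~ operating_efficiency + firm_profit + earnings_per_share + working_capital",
    data=df,
).fit()
print(model.summary())
```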

Keywords: firm, Nigeria, performance, stock price

Procedia PDF Downloads 477
2900 Bayesian Borrowing Methods for Count Data: Analysis of Incontinence Episodes in Patients with Overactive Bladder

Authors: Akalu Banbeta, Emmanuel Lesaffre, Reynaldo Martina, Joost Van Rosmalen

Abstract:

Including data from previous studies (historical data) in the analysis of the current study may reduce the sample size requirement and/or increase the power of analysis. The most common example is incorporating historical control data in the analysis of a current clinical trial. However, this only applies when the historical control data are similar enough to the current control data. Recently, several Bayesian approaches for incorporating historical data have been proposed, such as the meta-analytic-predictive (MAP) prior and the modified power prior (MPP), both for single control as well as for multiple historical control arms. Here, we examine the performance of the MAP and the MPP approaches for the analysis of (over-dispersed) count data. To this end, we propose a computational method for the MPP approach for the Poisson and the negative binomial models. We conducted an extensive simulation study to assess the performance of Bayesian approaches. Additionally, we illustrate our approaches on an overactive bladder data set. For similar data across the control arms, the MPP approach outperformed the MAP approach with respect to the statistical power. When the means across the control arms are different, the MPP yielded a slightly inflated type I error (TIE) rate, whereas the MAP did not. In contrast, when the dispersion parameters are different, the MAP gave an inflated TIE rate, whereas the MPP did not. We conclude that the MPP approach is more promising than the MAP approach for incorporating historical count data.
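
As a simplified illustration of borrowing with a power prior, the sketch below works the conjugate Poisson-Gamma case with a fixed discounting weight; the MPP studied in the paper instead treats that weight as unknown and handles over-dispersion, and the counts shown here are invented.

```python
# A toy conjugate sketch of borrowing historical counts with a (conditional) power prior
# for a Poisson rate. It fixes the discounting parameter delta rather than estimating it
# as the modified power prior (MPP) does, and it ignores over-dispersion.
import numpy as np

y_hist = np.array([4, 6, 5, 7, 3])     # historical control: incontinence episode counts (placeholder)
y_curr = np.array([5, 4, 6, 5])        # current control arm (placeholder)
a0, b0 = 0.5, 0.5                      # vague Gamma(shape, rate) initial prior
delta = 0.5                            # fixed discounting weight in [0, 1]

# Gamma posterior: historical data enter with weight delta, current data with weight 1.
shape = a0 + delta * y_hist.sum() + y_curr.sum()
rate = b0 + delta * len(y_hist) + len(y_curr)

print(f"posterior mean rate = {shape / rate:.2f}")
```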

Keywords: count data, meta-analytic prior, negative binomial, poisson

Procedia PDF Downloads 117
2899 Identification of Hepatocellular Carcinoma Using Supervised Learning Algorithms

Authors: Sagri Sharma

Abstract:

Analysis of diseases integrating multiple factors increases the complexity of the problem, and therefore the development of frameworks for the analysis of diseases is currently a topic of intense research. Due to the inter-dependence of the various parameters, the use of traditional methodologies has not been very effective, and newer methodologies are being sought to deal with the problem. Supervised learning algorithms are commonly used for performing predictions on previously unseen data. These algorithms are used in applications ranging from image analysis to protein structure and function prediction; they are trained on a known dataset to produce a predictor model that generates reasonable predictions for the response to new data. Gene expression profiles generated by DNA analysis experiments can be quite complex, since these experiments can involve hypotheses covering entire genomes. The well-known machine learning algorithm Support Vector Machine (SVM) is therefore applied to analyze the expression levels of thousands of genes simultaneously in a timely, automated, and cost-effective way. The objectives of the presented work are the development of a methodology to identify genes relevant to Hepatocellular Carcinoma (HCC) from gene expression datasets utilizing supervised learning algorithms and statistical evaluations, along with the development of a predictive framework that can perform classification tasks on new, unseen data.
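
A minimal scikit-learn sketch of the SVM classification step follows, assuming the expression matrix and labels are available as arrays; the file names, the feature-selection step, and the hyperparameters are assumptions.

```python
# A minimal sketch of SVM classification of gene expression profiles (HCC vs. normal).
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.svm import SVC

X = np.load("hcc_expression.npy")   # hypothetical samples x genes expression matrix
y = np.load("hcc_labels.npy")       # hypothetical 0/1 labels

clf = make_pipeline(
    StandardScaler(),
    SelectKBest(f_classif, k=100),   # keep the 100 most discriminative genes (illustrative)
    SVC(kernel="linear", C=1.0),
)
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```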

Keywords: artificial intelligence, biomarker, gene expression datasets, hepatocellular carcinoma, machine learning, supervised learning algorithms, support vector machine

Procedia PDF Downloads 429
2898 Information Management Approach in the Prediction of Acute Appendicitis

Authors: Ahmad Shahin, Walid Moudani, Ali Bekraki

Abstract:

This research presents a predictive data mining model for the accurate diagnosis of acute appendicitis, with the aim of maximizing health service quality, minimizing morbidity/mortality, and reducing cost. Acute appendicitis is the most common condition requiring timely, accurate diagnosis and surgical intervention. Although the treatment of acute appendicitis is simple and straightforward, its diagnosis remains difficult because no single sign, symptom, laboratory test, or imaging examination accurately confirms the diagnosis in all cases. This contributes to increased morbidity and negative appendectomy rates. In this study, the authors propose to generate an accurate model for predicting acute appendicitis that is based, firstly, on a segmentation technique associated with the ABC algorithm to segment the patients; secondly, on applying fuzzy logic to process the massive volume of heterogeneous and noisy data (age, sex, fever, white blood cell count, neutrophilia, CRP, urine, ultrasound, CT, appendectomy, etc.) in order to express knowledge and analyze the relationships among the data in a comprehensive manner; and thirdly, on applying a dynamic programming technique to reduce the number of data attributes. The proposed model is evaluated against a set of benchmark techniques and on a set of benchmark classification problems for osteoporosis, diabetes, and heart disease obtained from the UCI repository and other data sources.

Keywords: healthcare management, acute appendicitis, data mining, classification, decision tree

Procedia PDF Downloads 350
2897 Transformers in Gene Expression-Based Classification

Authors: Babak Forouraghi

Abstract:

A genetic circuit is a collection of interacting genes and proteins that enable individual cells to implement and perform vital biological functions such as cell division, growth, death, and signaling. In cell engineering, synthetic gene circuits are engineered networks of genes specifically designed to implement functionalities that are not evolved by nature. These engineered networks enable scientists to tackle complex problems such as engineering cells to produce therapeutics within the patient's body, altering T cells to target cancer-related antigens for treatment, improving antibody production using engineered cells, tissue engineering, and production of genetically modified plants and livestock. Construction of computational models to realize genetic circuits is an especially challenging task since it requires the discovery of the flow of genetic information in complex biological systems. Building synthetic biological models is also a time-consuming process with relatively low prediction accuracy for highly complex genetic circuits. The primary goal of this study was to investigate the utility of a pre-trained bidirectional encoder transformer that can accurately predict gene expressions in genetic circuit designs. The main reason for using transformers is their innate ability (the attention mechanism) to take account of the semantic context present in long DNA chains, which is heavily dependent on the spatial representation of their constituent genes. Previous approaches to gene circuit design, such as CNN and RNN architectures, are unable to capture semantic dependencies in long contexts as required in most real-world applications of synthetic biology. For instance, RNN models (LSTM, GRU), although able to learn long-term dependencies, greatly suffer from the vanishing gradient and low-efficiency problems when they sequentially process past states and compress contextual information into a bottleneck for long input sequences. In other words, these architectures are not equipped with the necessary attention mechanisms to follow a long chain of genes with thousands of tokens. To address the above-mentioned limitations of previous approaches, a transformer model was built in this work as a variation of the existing DNA Bidirectional Encoder Representations from Transformers (DNABERT) model. It is shown that the proposed transformer is capable of capturing contextual information from long input sequences with its attention mechanism. In a previous work on genetic circuit design, traditional approaches to classification and regression, such as Random Forest, Support Vector Machine, and Artificial Neural Networks, were able to achieve reasonably high R2 accuracy levels of 0.95 to 0.97. However, the transformer model utilized in this work, with its attention-based mechanism, was able to achieve a perfect accuracy level of 100%. Further, it is demonstrated that the efficiency of the transformer-based gene expression classifier is not dependent on the presence of large amounts of training examples, which may be difficult to compile in many real-world gene circuit designs.
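
For illustration, a minimal sketch of fine-tuning a DNABERT-style encoder for sequence classification with the Hugging Face transformers library is given below; the checkpoint name, k-mer tokenization, and binary label scheme are assumptions, not the authors' exact configuration.

```python
# A minimal sketch of using a DNABERT-style encoder to classify a DNA sequence
# (e.g. low vs. high expression). Checkpoint name and labels are illustrative assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

checkpoint = "zhihan1996/DNA_bert_6"   # assumed publicly available 6-mer DNABERT checkpoint
tokenizer = AutoTokenizer.from_pretrained(checkpoint, trust_remote_code=True)
model = AutoModelForSequenceClassification.from_pretrained(
    checkpoint, num_labels=2, trust_remote_code=True
)

def to_kmers(seq, k=6):
    # DNABERT expects space-separated overlapping k-mers as tokens.
    return " ".join(seq[i:i + k] for i in range(len(seq) - k + 1))

inputs = tokenizer(to_kmers("ATGCGTACGTTAGCATCGATCGTACG"), return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits    # self-attention lets every k-mer attend to all others
print(logits.softmax(dim=-1))
```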

Keywords: transformers, generative ai, gene expression design, classification

Procedia PDF Downloads 59
2896 Using Mixed Methods in Studying Classroom Social Network Dynamics

Authors: Nashrawan Naser Taha, Andrew M. Cox

Abstract:

In a multi-cultural learning context, where ties are weak and dynamic, combining qualitative with quantitative research methods may be more effective. Such a combination may also allow us to answer different types of questions, for example about people's perception of the network. In this study, the use of observation, interviews, and photos was explored as a way of enhancing data from social network questionnaires. Integrating all of these methods was found to enhance the quality and accuracy of the data collected, while also providing a richer story of the network dynamics and the factors that shaped these changes over time.

Keywords: mixed methods, social network analysis, multi-cultural learning, social network dynamics

Procedia PDF Downloads 510
2895 Development of Membrane Reactor for Auto Thermal Reforming of Dimethyl Ether for Hydrogen Production

Authors: Tie-Qing Zhang, Seunghun Jung, Young-Bae Kim

Abstract:

This research is devoted to developing a membrane reactor to flexibly meet the hydrogen demand of onboard fuel cells, which is an important part of green energy development. Among many renewable chemical products, dimethyl ether (DME) has the advantages of a low reaction temperature (400 °C in this study), high hydrogen atom content, low toxicity, and easy preparation. Autothermal reforming, in turn, has a high hydrogen recovery rate and is thermally neutral during the reaction, so an additional heat source for the hydrogen production process can be omitted. Therefore, the DME autothermal reforming process was adopted in this study. To control the temperature of the reaction catalyst bed and the hydrogen production rate, a Model Predictive Control (MPC) scheme was designed. Taking these two variables as the control objectives, stable operation of the reformer can be achieved by controlling the flow rates of DME, steam, and high-purity air in real time. To prevent catalyst poisoning in the fuel cell, the hydrogen needs to be purified to reduce the carbon monoxide content to below 50 ppm. Therefore, a Pd-Ag hydrogen semi-permeable membrane with a thickness of 3-5 μm was inserted into the autothermal reactor, and the permeation efficiency of hydrogen was improved by steam purging on the permeation side. Finally, hydrogen with a purity of 99.99% was obtained.

Keywords: hydrogen production, auto thermal reforming, membrane, fuel cell

Procedia PDF Downloads 104
2894 Binary Logistic Regression Model in Predicting the Employability of Senior High School Graduates

Authors: Cromwell F. Gopo, Joy L. Picar

Abstract:

This study aimed to predict the employability of senior high school graduates for S.Y. 2018-2019 in the Davao del Norte Division through a quantitative research design using descriptive and predictive approaches on the indicated parameters, namely gender, school type, academics, academic award recipient, skills, values, and strand. The respondents of the study were 33 secondary schools offering senior high school programs, identified through simple random sampling, which yielded 1,530 cases of graduates' secondary data; these were analyzed using frequency, percentage, mean, standard deviation, and binary logistic regression. Results showed that the majority of the senior high school graduates, who come from large schools, were female. Further, less than half of these graduates received an academic award in any semester. In general, the graduates' performance in academics, skills, and values was proficient. Moreover, less than half of the graduates were not employed, and those who were employed were either contractual, casual, or part-time workers, dominated by GAS graduates. The predictors of employability were gender and the Information and Communications Technology (ICT) strand, while the remaining variables did not add significantly to the model. The null hypothesis was rejected, as the coefficients of the predictors in the binary logistic regression equation did not take the value of 0. After utilizing the model, it was concluded that Technical-Vocational-Livelihood (TVL) graduates, except those in ICT, had greater estimates of employability.
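
For illustration, the model could be specified as below with statsmodels, assuming the graduates' secondary data are in a tidy table; the file and column names are invented placeholders.

```python
# A minimal sketch of the binary logistic regression of employment status on the study's
# predictors.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("shs_graduates_2018_2019.csv")   # hypothetical file name

model = smf.logit(
    "employed ~ C(gender) + C(school_type) + academic_grade + C(award_recipient)"
    " + skills_rating + values_rating + C(strand)",
    data=df,
).fit()
print(model.summary())   # coefficients and p-values per predictor
```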

Keywords: employability, senior high school graduates, Davao del Norte, Philippines

Procedia PDF Downloads 152
2893 Digital Phase Shifting Holography in a Non-Linear Interferometer using Undetected Photons

Authors: Sebastian Töpfer, Marta Gilaberte Basset, Jorge Fuenzalida, Fabian Steinlechner, Juan P. Torres, Markus Gräfe

Abstract:

This work introduces a combination of digital phase-shifting holography with a non-linear interferometer using undetected photons. Non-linear interferometers can be used in combination with a measurement scheme called quantum imaging with undetected photons, which allows the separation of the wavelengths used for sampling an object and for detecting it on the imaging sensor. This method has recently attracted increasing attention, as it allows the use of exotic wavelengths (e.g., mid-infrared, ultraviolet) for object interaction while at the same time keeping the detection in spectral regions with highly developed, comparably low-cost imaging sensors. The object information, including its transmission and phase influence, is recorded in the form of an interferometric pattern. To collect these patterns, this work combines quantum imaging with undetected photons with digital phase-shifting holography using a minimal sampling of the interference. This extends the measurement capabilities of the quantum imaging scheme and brings it one step closer to application. Quantum imaging with undetected photons uses correlated photons generated by spontaneous parametric down-conversion in a non-linear interferometer to create indistinguishable photon pairs, which leads to an effect called induced coherence without induced emission. Placing an object inside changes the interferometric pattern depending on the object's properties. Digital phase-shifting holography records multiple images of the interference with defined phase shifts to reconstruct the complete interference shape, which can afterward be used to analyze the changes introduced by the object and infer its properties. An extensive characterization of this method was done using a proof-of-principle setup. The measured spatial resolution, phase accuracy, and transmission accuracy are compared for different combinations of camera exposure times and numbers of interference sampling steps. The current limits of this method are shown in order to allow further improvements. To summarize, this work presents an alternative holographic measurement method using non-linear interferometers in combination with quantum imaging to enable new ways of measuring and to motivate continuing research.
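
As a reference point, the sketch below shows conventional four-step phase-shifting reconstruction on synthetic frames; the paper's minimal-sampling variant inside a non-linear interferometer is not reproduced here.

```python
# A minimal sketch of classical four-step phase-shifting reconstruction of phase and
# interference amplitude from four intensity frames.
import numpy as np

def reconstruct(frames):
    """frames: four images taken at reference phase shifts 0, pi/2, pi, 3*pi/2."""
    i0, i1, i2, i3 = [np.asarray(f, dtype=float) for f in frames]
    phase = np.arctan2(i3 - i1, i0 - i2)                          # object-induced phase map
    amplitude = 0.5 * np.sqrt((i3 - i1) ** 2 + (i0 - i2) ** 2)    # interference amplitude
    return phase, amplitude

# Synthetic example: a flat object with a quadratic phase profile.
x, y = np.meshgrid(np.linspace(-1, 1, 64), np.linspace(-1, 1, 64))
true_phase = 2 * (x ** 2 + y ** 2)
frames = [1 + 0.8 * np.cos(true_phase + k * np.pi / 2) for k in range(4)]
phase, amp = reconstruct(frames)
# The recovered phase matches the true phase up to 2*pi wrapping.
print(np.allclose(np.angle(np.exp(1j * (phase - true_phase))), 0, atol=1e-6))
```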

Keywords: digital holography, quantum imaging, quantum holography, quantum metrology

Procedia PDF Downloads 92
2892 The Enhancement of Target Localization Using Ship-Borne Electro-Optical Stabilized Platform

Authors: Jaehoon Ha, Byungmo Kang, Kilho Hong, Jungsoo Park

Abstract:

Electro-optical (EO) stabilized platforms have been widely used for surveillance and reconnaissance on various types of vehicles, from surface ships to unmanned air vehicles (UAVs). EO stabilized platforms usually consist of an assembly of structures, bearings, and motors called a gimbal, in which a gyroscope is installed. EO elements, such as a CCD camera and an IR camera, are mounted to the gimbal, which has a range of motion in elevation and azimuth and can designate and track a target. In addition, a laser range finder (LRF) can be added to the gimbal in order to acquire the precise slant range from the platform to the target. Recently, versatile target localization functionality has been needed in order to cooperate with the weapon systems mounted on the same platform, and the target information, such as location or velocity, needs to be more accurate. The accuracy of the target information depends on diverse component errors and on the alignment errors of each component. In particular, the type of moving platform can affect the accuracy of the target information. In the case of flying platforms, or UAVs, the target location error can increase with altitude, so it is important to measure altitude as precisely as possible. In the case of surface ships, the target location error can increase with the obliqueness of the elevation angle of the gimbal, since the altitude of the EO stabilized platform is relatively low. The farther the slant range from the surface ship to the target, the more extreme the obliqueness of the elevation angle, which can hamper the precise acquisition of the target information. So far, there have been many studies on EO stabilized platforms of flying vehicles, but few researchers have focused on ship-borne EO stabilized platforms. In this paper, we deal with a target localization method for an EO stabilized platform located on the mast of a surface ship. In particular, we need to overcome the limitation caused by the obliqueness of the elevation angle of the gimbal. We introduce a well-known approach for target localization using the Unscented Kalman Filter (UKF) and present the problem definition showing the above-mentioned limitation. Finally, we show the effectiveness of the approach through computer simulations.
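
For illustration, the basic geometric fix from gimbal angles and LRF range, and the sensitivity of a geometry-only range estimate to small elevation errors at shallow depression angles, can be sketched as follows; the numbers are illustrative, and the UKF itself is not shown.

```python
# A minimal geometric sketch of the ship-borne target fix in a local North-East-Down
# (NED) frame centered at the sensor; elevation > 0 is up.
import numpy as np

def target_ned(azimuth_deg, elevation_deg, slant_range_m):
    """Target offset (north, east, down) from the EO platform."""
    az, el = np.radians(azimuth_deg), np.radians(elevation_deg)
    horizontal = slant_range_m * np.cos(el)
    return np.array([horizontal * np.cos(az), horizontal * np.sin(az), -slant_range_m * np.sin(el)])

print(target_ned(azimuth_deg=30.0, elevation_deg=-0.5, slant_range_m=3000.0))

# Why oblique elevation angles hurt: ground range inferred from a 25 m mast height is
# very sensitive to a 0.05 deg elevation error when the depression angle is shallow.
h = 25.0
for depression in (2.0, 1.0, 0.5):
    r = h / np.tan(np.radians(depression))
    r_err = h / np.tan(np.radians(depression - 0.05))
    print(f"depression {depression:>4} deg: range {r:8.0f} m, error {r_err - r:7.0f} m")
```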

Keywords: target localization, ship-borne electro-optical stabilized platform, unscented kalman filter

Procedia PDF Downloads 520
2891 From Linear to Circular Model: An Artificial Intelligence-Powered Approach in Fosso Imperatore

Authors: Carlotta D’Alessandro, Giuseppe Ioppolo, Katarzyna Szopik-Depczyńska

Abstract:

The growing scarcity of resources and the mounting pressures of climate change, water pollution, and chemical contamination have prompted societies, governments, and businesses to seek ways to minimize their environmental impact. To combat climate change and foster sustainability, Industrial Symbiosis (IS) offers a powerful approach, facilitating the shift toward a circular economic model. IS has gained prominence in the European Union's policy framework as a crucial enabler of resource efficiency and circular economy practices. The essence of IS lies in the collaborative sharing of resources such as energy, material by-products, waste, and water, thanks to geographic proximity. It can be exemplified by eco-industrial parks (EIPs), which are natural environments for boosting cooperation and resource sharing between businesses. EIPs are characterized by groups of businesses situated in proximity, connected by a network of both cooperative and competitive interactions. They represent a sustainable industrial model aimed at reducing resource use, waste, and environmental impact while fostering economic and social wellbeing. IS, combined with Artificial Intelligence (AI)-driven technologies, can further optimize resource sharing and efficiency within EIPs. This research, supported by the "CE_IPs" project, aims to analyze the potential of IS and AI in advancing circularity and sustainability at Fosso Imperatore. The Fosso Imperatore Industrial Park in Nocera Inferiore, Italy, specializes in agriculture and the industrial transformation of agricultural products, particularly tomatoes, tobacco, and textile fibers. This unique industrial cluster, centered around tomato cultivation and processing, also includes mechanical engineering enterprises and agricultural packaging firms. To stimulate the shift from a traditional to a circular economic model, an AI-powered Local Development Plan (LDP) is developed for Fosso Imperatore. It can leverage data analytics, predictive modeling, and stakeholder engagement to optimize resource utilization, reduce waste, and promote sustainable industrial practices. A comprehensive SWOT analysis of the AI-powered LDP revealed several key factors influencing its potential success and challenges. Among the notable strengths and opportunities arising from AI implementation are reduced processing times, fewer human errors, and increased revenue generation. Furthermore, predictive analytics minimize downtime, bolster productivity, and elevate quality while mitigating workplace hazards. However, the integration of AI also presents potential weaknesses and threats, including significant financial investment, since implementing and maintaining AI systems can be costly. The widespread adoption of AI could lead to job losses in certain sectors. Lastly, AI systems are susceptible to cyberattacks, posing risks to data security and operational continuity. Moreover, an Analytic Hierarchy Process (AHP) analysis was employed to yield a prioritized ranking of the outlined AI-driven LDP practices based on stakeholder input, ensuring a more comprehensive and representative understanding of their relative significance for achieving sustainability in the Fosso Imperatore Industrial Park. While this study provides valuable insights into the potential of an AI-powered LDP at Fosso Imperatore, it is important to note that the findings may not be directly applicable to all industrial parks, particularly those with different sizes, geographic locations, or industry compositions. Additional study is necessary to scrutinize the generalizability of these results and to identify best practices for implementing AI-driven LDPs in diverse contexts.
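
For illustration, the AHP prioritization step can be sketched as below: priority weights are taken from the principal eigenvector of a pairwise comparison matrix and checked for consistency. The matrix entries are invented, not the stakeholders' actual judgments.

```python
# A minimal sketch of AHP priority derivation via the principal eigenvector.
import numpy as np

A = np.array([[1.0, 3.0, 5.0],     # illustrative pairwise judgments for three LDP practices
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()

n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)    # consistency index
cr = ci / 0.58                          # Saaty's random index for n = 3
print("priorities:", np.round(weights, 3), "consistency ratio:", round(cr, 3))
```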

Keywords: artificial intelligence, climate change, Fosso Imperatore, industrial park, industrial symbiosis

Procedia PDF Downloads 25
2890 Development of a New Device for Bending Fatigue Testing

Authors: B. Mokhtarnia, M. Layeghi

Abstract:

This work presented an original bending fatigue testing setup for the fatigue characterization of composite materials. A three-point quasi-static setup was introduced that is capable of applying stress-controlled loads with different loading waveforms, frequencies, and stress ratios. The setup was equipped with computerized measuring instruments to evaluate fatigue damage mechanisms. A detailed description of its different parts and working features is given, and a dynamic analysis was performed to verify the functional accuracy of the device. Feasibility was validated successfully by conducting experimental fatigue tests.

Keywords: bending fatigue, quasi-static testing setup, experimental fatigue testing, composites

Procedia PDF Downloads 132
2889 DEMs: A Multivariate Comparison Approach

Authors: Juan Francisco Reinoso Gordo, Francisco Javier Ariza-López, José Rodríguez Avi, Domingo Barrera Rosillo

Abstract:

The evaluation of the quality of a data product is based on comparison of the product with a reference of greater accuracy. In the case of DEM data products, quality assessment usually focuses on positional accuracy, and few studies consider other terrain characteristics, such as slope and orientation. The proposal made here consists of evaluating the similarity of two DEMs (a product and a reference) through the joint analysis of the distribution functions of the variables of interest, for example, elevations, slopes and orientations. This is a multivariate approach that focuses on distribution functions, not on single parameters such as mean values or dispersions (e.g. root mean squared error or variance), and is considered to be a more holistic approach. The use of the Kolmogorov-Smirnov test is proposed due to its non-parametric nature, since the distributions of the variables of interest cannot always be adequately modeled by parametric models (e.g. the Normal distribution model). In addition, its application to the multivariate case is carried out jointly by means of a single test on the convolution of the distribution functions of the variables considered, which avoids the use of corrections such as Bonferroni when several statistical hypothesis tests are carried out together. In this work, two DEM products have been considered: DEM02 with a resolution of 2x2 meters and DEM05 with a resolution of 5x5 meters, both generated by the National Geographic Institute of Spain. DEM02 is considered the reference and DEM05 the product to be evaluated. In addition, the slope and aspect derived models have been calculated by GIS operations on the two DEM datasets. Through sample simulation processes, the adequate behavior of the Kolmogorov-Smirnov statistical test has been verified when the null hypothesis is true, which allows calibrating the value of the statistic for the desired significance level (e.g. 5%). Once the process has been calibrated, the same process can be applied to compare the similarity of different DEM data sets (e.g. the DEM05 versus the DEM02). In summary, an innovative alternative for the comparison of DEM data sets, based on a multivariate non-parametric perspective, has been proposed by means of a single Kolmogorov-Smirnov test. This new approach could be extended to other DEM features of interest (e.g. curvature) and to more than three variables.
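
For illustration, a per-variable two-sample Kolmogorov-Smirnov comparison can be run as below with SciPy; note that the paper's actual proposal is a single joint test on the convolution of the distribution functions, which is not reproduced here, and the file names are placeholders.

```python
# A minimal sketch of per-variable two-sample KS comparisons between a reference DEM
# and a product DEM (and their GIS-derived slope/aspect rasters).
import numpy as np
from scipy.stats import ks_2samp

variables = ("elevation", "slope", "aspect")
ref = {v: np.load(f"dem02_{v}.npy").ravel() for v in variables}    # hypothetical rasters
prod = {v: np.load(f"dem05_{v}.npy").ravel() for v in variables}   # hypothetical rasters

for v in variables:
    stat, p = ks_2samp(ref[v], prod[v])
    print(f"{v:>9}: KS statistic = {stat:.4f}, p = {p:.3g}")
```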

Keywords: data quality, DEM, kolmogorov-smirnov test, multivariate DEM comparison

Procedia PDF Downloads 115
2888 Statistical Mechanical Approach in Modeling of Hybrid Solar Cells for Photovoltaic Applications

Authors: A. E. Kobryn

Abstract:

We present both descriptive and predictive modeling of the structural properties of blends of PCBM or organic-inorganic hybrid perovskites of the type CH3NH3PbX3 (X=Cl, Br, I) with P3HT, P3BT or the squaraine SQ2 dye sensitizer, including adsorption on TiO2 clusters having a rutile (110) surface. In our study, we use a methodology that allows computing the microscopic structure of blends on the nanometer scale and getting insight into the miscibility of their components at various thermodynamic conditions. The methodology is based on the integral equation theory of molecular liquids in the reference interaction site representation/model (RISM) and uses the universal force field. Input parameters for RISM, such as optimized molecular geometries and the charge distribution of interaction sites, are derived with the use of density functional theory methods. To compare the diffusivity of PCBM in binary blends with P3HT and P3BT, respectively, the study is complemented with MD simulation. Very good agreement with experiment and with reports of alternative modeling or simulation is observed for the PCBM in P3HT system. The performance of P3BT with perovskites, however, is as expected. The calculated nanoscale morphologies of blends of P3HT, P3BT or SQ2 with perovskites, including adsorption on TiO2, are all new and serve as an instrument in the rational design of organic/hybrid photovoltaics. They are used in collaboration with experts who actually make prototypes or devices for practical applications.

Keywords: multiscale theory and modeling, nanoscale morphology, organic-inorganic halide perovskites, three dimensional distribution

Procedia PDF Downloads 155
2887 Combining the Deep Neural Network with the K-Means for Traffic Accident Prediction

Authors: Celso L. Fernando, Toshio Yoshii, Takahiro Tsubota

Abstract:

Understanding the causes of road accidents and predicting their occurrence is key to preventing deaths and serious injuries from road accident events. Traditional statistical methods such as Poisson and logistic regression have been used to find the association of traffic and environmental factors with accident occurrence; more recently, the artificial neural network (ANN), a computational technique that learns from historical data to make more accurate predictions, has emerged. Despite its ability to make accurate predictions, the ANN has difficulty dealing with a highly unbalanced distribution of attribute patterns in the training dataset; in such circumstances, the ANN treats the minority group as noise. However, in real-world data, the minority group is often the group of interest; for example, in road traffic accident data, accident events are the group of interest. This study proposes a combination of k-means with the ANN to improve the predictive ability of the neural network model by alleviating the effect of the unbalanced distribution of attribute patterns in the training dataset. The results show that the proposed method improves the ability of the neural network to make predictions on a dataset with a highly unbalanced attribute pattern distribution; on an evenly distributed dataset, however, the proposed method performs almost like a standard neural network.
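
One common way to realize such a combination is sketched below: the majority (non-accident) class is summarized by k-means centroids before training the network. This is an illustrative reading of the idea on synthetic data, not necessarily the authors' exact scheme.

```python
# A minimal sketch of k-means cluster-based undersampling of the majority class
# before training a neural network on an unbalanced dataset.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPClassifier
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=5000, weights=[0.95, 0.05], random_state=0)

X_minority = X[y == 1]
n_centroids = len(X_minority)    # choose as many centroids as minority samples to balance classes
centroids = KMeans(n_clusters=n_centroids, n_init=10, random_state=0).fit(X[y == 0]).cluster_centers_

X_bal = np.vstack([centroids, X_minority])
y_bal = np.array([0] * n_centroids + [1] * len(X_minority))

clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0).fit(X_bal, y_bal)
print("recall on minority (accident) class:", clf.score(X[y == 1], y[y == 1]))
```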

Keywords: accident risks estimation, artificial neural network, deep learning, k-mean, road safety

Procedia PDF Downloads 163
2886 Improving the Design of Blood Pressure and Blood Saturation Monitors

Authors: L. Parisi

Abstract:

A blood pressure monitor, or sphygmomanometer, can be either manual or automatic, employing the auscultatory method or the oscillometric method, respectively. The manual version of the sphygmomanometer involves an inflatable cuff with a stethoscope used to detect the sounds generated by the arterial walls in order to measure blood pressure in an artery. An automatic sphygmomanometer can be effectively used to monitor blood pressure through a pressure sensor, which detects vibrations provoked by oscillations of the arterial walls. The pressure sensor implemented in this device improves the accuracy of the measurements taken.

Keywords: blood pressure, blood saturation, sensors, actuators, design improvement

Procedia PDF Downloads 455
2885 Quantification of Dispersion Effects in Arterial Spin Labelling Perfusion MRI

Authors: Rutej R. Mehta, Michael A. Chappell

Abstract:

Introduction: Arterial spin labelling (ASL) is an increasingly popular perfusion MRI technique, in which arterial blood water is magnetically labelled in the neck before flowing into the brain, providing a non-invasive measure of cerebral blood flow (CBF). The accuracy of ASL CBF measurements, however, is hampered by dispersion effects; the distortion of the ASL labelled bolus during its transit through the vasculature. In spite of this, the current recommended implementation of ASL – the white paper (Alsop et al., MRM, 73.1 (2015): 102-116) – does not account for dispersion, which leads to the introduction of errors in CBF. Given that the transport time from the labelling region to the tissue – the arterial transit time (ATT) – depends on the region of the brain and the condition of the patient, it is likely that these errors will also vary with the ATT. In this study, various dispersion models are assessed in comparison with the white paper (WP) formula for CBF quantification, enabling the errors introduced by the WP to be quantified. Additionally, this study examines the relationship between the errors associated with the WP and the ATT – and how this is influenced by dispersion. Methods: Data were simulated using the standard model for pseudo-continuous ASL, along with various dispersion models, and then quantified using the formula in the WP. The ATT was varied from 0.5s-1.3s, and the errors associated with noise artefacts were computed in order to define the concept of significant error. The instantaneous slope of the error was also computed as an indicator of the sensitivity of the error with fluctuations in ATT. Finally, a regression analysis was performed to obtain the mean error against ATT. Results: An error of 20.9% was found to be comparable to that introduced by typical measurement noise. The WP formula was shown to introduce errors exceeding 20.9% for ATTs beyond 1.25s even when dispersion effects were ignored. Using a Gaussian dispersion model, a mean error of 16% was introduced by using the WP, and a dispersion threshold of σ=0.6 was determined, beyond which the error was found to increase considerably with ATT. The mean error ranged from 44.5% to 73.5% when other physiologically plausible dispersion models were implemented, and the instantaneous slope varied from 35 to 75 as dispersion levels were varied. Conclusion: It has been shown that the WP quantification formula holds only within an ATT window of 0.5 to 1.25s, and that this window gets narrower as dispersion occurs. Provided that the dispersion levels fall below the threshold evaluated in this study, however, the WP can measure CBF with reasonable accuracy if dispersion is correctly modelled by the Gaussian model. However, substantial errors were observed with other common models for dispersion with dispersion levels similar to those that have been observed in literature.
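
For reference, the white-paper single-compartment quantification formula (Alsop et al., 2015), which assumes no dispersion, can be written as below; the parameter defaults are typical literature values, not those of the simulations in this study.

```python
# A minimal sketch of the white-paper pCASL CBF quantification formula.
import numpy as np

def wp_cbf(delta_m, m0, pld=1.8, tau=1.8, t1b=1.65, alpha=0.85, lam=0.9):
    """CBF in ml/100g/min from the ASL difference signal delta_m and the PD image m0.

    pld: post-labelling delay (s), tau: label duration (s), t1b: T1 of blood (s),
    alpha: labelling efficiency, lam: blood-brain partition coefficient.
    """
    return (6000.0 * lam * delta_m * np.exp(pld / t1b)
            / (2.0 * alpha * t1b * m0 * (1.0 - np.exp(-tau / t1b))))

print(round(wp_cbf(delta_m=0.006, m0=1.0), 1))   # roughly a plausible grey-matter value
```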

Keywords: arterial spin labelling, dispersion, MRI, perfusion

Procedia PDF Downloads 372
2884 Biomedical Definition Extraction Using Machine Learning with Synonymous Feature

Authors: Jian Qu, Akira Shimazu

Abstract:

OOV (Out Of Vocabulary) terms are terms that cannot be found in many dictionaries. Although it is possible to translate such OOV terms, the translations do not provide any real information for a user. We present an OOV term definition extraction method that uses information available on the Internet. We use features such as the occurrence of synonyms and location distances, and we apply a machine learning method to find the correct definitions for OOV terms. We tested our method on both biomedical-type and name-type OOV terms; our method outperforms existing work with an accuracy of 86.5%.

Keywords: information retrieval, definition retrieval, OOV (out of vocabulary), biomedical information retrieval

Procedia PDF Downloads 496
2882 Eliminating Cutter-Path Deviation for Five-Axis NC Machining

Authors: Alan C. Lin, Tsong Der Lin

Abstract:

This study proposes a deviation control method to add interpolation points to numerical control (NC) codes of five-axis machining in order to achieve the required machining accuracy. Specific research issues include: (1) converting machining data between the CL (cutter location) domain and the NC domain, (2) calculating the deviation between the deviated path and the linear path, (3) finding interpolation points, and (4) determining tool orientations for the interpolation points. System implementation with practical examples will also be included to highlight the applicability of the proposed methodology.

Keywords: CAD/CAM, cutter path, five-axis machining, numerical control

Procedia PDF Downloads 424
2882 Role of Vitamin D in Osseointegration of Dental Implant

Authors: Pouya Khaleghi

Abstract:

Dental implants are a successful treatment modality for restoring both function and aesthetics. Dental implant treatment has predictable results in the replacement of lost teeth and a high success rate even in the long term. The most important factor responsible for the positive course of implant treatment is the process of osseointegration between the implant structure and the host's bone tissue. During recent years, many studies have focused on surgical and prosthetic factors, as well as implant-related factors. However, implant failure still occurs despite the improvements that have led to the increased survival rate of dental implants, which suggests the possible role of some host-related risk factors. Vitamin D is a fat-soluble vitamin regulating calcium and phosphorus metabolism in tissues. The role of vitamin D in bone healing has been under investigation for several years. Vitamin D deficiency has also been associated with impaired and delayed callus formation and fracture healing; however, the role of vitamin D has not been fully clarified. Therefore, it is extremely important to study the phenomenon of the connection formed between bone tissue and the surface of a titanium implant and to find correlations between the 25-hydroxycholecalciferol concentration in blood serum and the course of osseointegration. Because the processes of bone remodeling are very dynamic in the period of actual osseointegration, it is necessary to obtain the correct concentration of vitamin D3 metabolites in blood serum. In conclusion, a correct level of 25-hydroxycholecalciferol on the day of surgery and treatment of vitamin D deficiency have a significant influence on the increase in bone level at the implant site during the process of osseointegration assessed radiologically.

Keywords: implant, osseointegration, vitamin d, dental

Procedia PDF Downloads 174
2881 Pyramid Binary Pattern for Age Invariant Face Verification

Authors: Saroj Bijarnia, Preety Singh

Abstract:

We propose a simple and effective biometric system based on face verification across aging using a new variant of texture feature, the Pyramid Binary Pattern. This employs the Local Binary Pattern along with its hierarchical information. Dimension reduction of the generated texture feature vector is done using Principal Component Analysis, and a Support Vector Machine is used for classification. Our proposed method achieves an accuracy of 92.24% and can be used in an automated age-invariant face verification system.
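
For illustration, a minimal sketch of the feature pipeline follows, using plain multi-scale LBP histograms as a stand-in for the proposed Pyramid Binary Pattern; the pyramid depth and LBP parameters are assumptions.

```python
# A minimal sketch of pyramid-style LBP features for face verification.
import numpy as np
from skimage import img_as_ubyte
from skimage.feature import local_binary_pattern
from skimage.transform import pyramid_gaussian

def pyramid_lbp(image, levels=3, P=8, R=1):
    """Concatenate uniform-LBP histograms computed at each level of a Gaussian pyramid."""
    feats = []
    for img in pyramid_gaussian(image, max_layer=levels - 1):
        lbp = local_binary_pattern(img_as_ubyte(np.clip(img, 0, 1)), P, R, method="uniform")
        hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
        feats.append(hist)
    return np.concatenate(feats)

face = np.random.rand(128, 128)      # stand-in for an aligned grayscale face image
print(pyramid_lbp(face).shape)       # (30,) = 3 levels x (P + 2) bins
# The stacked feature vectors would then be reduced with PCA and classified with an SVM,
# e.g. make_pipeline(PCA(n_components=50), SVC()) from scikit-learn.
```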

Keywords: biometrics, age invariant, verification, support vector machine

Procedia PDF Downloads 353
2880 Comparative Connectionism: Study of the Biological Constraints of Learning Through the Manipulation of Various Architectures in a Neural Network Model under the Biological Principle of the Correlation Between Structure and Function

Authors: Giselle Maggie-Fer Castañeda Lozano

Abstract:

The main objective of this research was to explore the role of neural network architectures in simulating behavioral phenomena as a potential explanation for selective associations, specifically related to biological constraints on learning. Biological constraints on learning refer to the limitations observed in conditioning procedures, where learning is expected to occur. The study involved simulations of five different experiments exploring various phenomena and sources of biological constraints in learning. These simulations included the interaction between response and reinforcer, stimulus and reinforcer, specificity of stimulus-reinforcer associations, species differences, neuroanatomical constraints, and learning in uncontrolled conditions. The overall results demonstrated that by manipulating neural network architectures, conditions can be created to model and explain diverse biological constraints frequently reported in comparative psychology literature as learning typicities. Additionally, the simulations offer predictive content worthy of experimental testing in the pursuit of new discoveries regarding the specificity of learning. The implications and limitations of these findings are discussed. Finally, it is suggested that this research could inaugurate a line of inquiry involving the use of neural networks to study biological factors in behavior, fostering the development of more ethical and precise research practices.

Keywords: comparative psychology, connectionism, conditioning, experimental analysis of behavior, neural networks

Procedia PDF Downloads 71