Search results for: feature fusion
1378 High Speed Motion Tracking with Magnetometer in Nonuniform Magnetic Field
Authors: Jeronimo Cox, Tomonari Furukawa
Abstract:
Magnetometers have become more popular in inertial measurement units (IMUs) for their ability to correct estimations using the earth's magnetic field. Accelerometer- and gyroscope-based packages suffer from dead-reckoning errors that accumulate over time. Localization with magnetometer-inclusive IMUs has become popular in robotic applications as a way to track the odometry of slower-speed robots. With high-speed motions, the error accumulates over shorter periods of time, making such motions difficult to track with an IMU. Tracking a high-speed motion is especially difficult with limited observability: visual obstruction of the motion leaves motion-tracking cameras unusable, and when motions are too dynamic for estimation techniques reliant on the observability of the gravity vector, the use of magnetometers is further justified. As available magnetometer calibration methods are limited by the assumption that the background magnetic field is uniform, estimation in nonuniform magnetic fields is problematic. Hard iron distortion is a distortion of the magnetic field by other objects that produce magnetic fields; it is often observed as the offset from the origin of the center of the data points when a magnetometer is rotated, and its magnitude depends on proximity to the distortion sources. Soft iron distortion is more related to the scaling of the axes of magnetometer sensors. Hard iron distortion is the larger contributor to attitude estimation error with magnetometers. Indoor environments or spaces inside ferrite-based structures, such as building reinforcements or a vehicle, often cause distortions with proximity. As positions correlate to areas of distortion, methods of magnetometer localization include producing spatial maps of the magnetic field and collecting distortion signatures to better aid location tracking. The goal of this paper is to compare magnetometer methods that do not need pre-produced magnetic field maps, since mapping the magnetic field in some spaces can be costly and inefficient. Dynamic measurement fusion is used to track the motion of a multi-link system. Conventional calibration by collecting data while rotating at a static point, real-time estimation of calibration parameters at each time step, and using two magnetometers to determine local hard iron distortion are compared to confirm the robustness and accuracy of each technique. With opposite-facing magnetometers, hard iron distortion can be accounted for regardless of position, rather than assuming that hard iron distortion is constant under positional change. The motion measured is a repeatable planar motion of a two-link system connected by revolute joints; the links are translated on a moving base to impulse rotation of the links. The joints are equipped with absolute encoders and the motion is recorded with cameras to enable ground-truth comparison with each of the magnetometer methods. While the two-magnetometer method accounts for local hard iron distortion, the method fails where the magnetic field direction in space is inconsistent.
Keywords: motion tracking, sensor fusion, magnetometer, state estimation
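As a point of reference for the static-calibration baseline compared above, here is a minimal sketch of hard-iron offset estimation from a rotation-at-a-point data collection. It assumes the readings lie on a sphere displaced by the offset and fits the center by linear least squares; the function name and synthetic data are illustrative, not the authors' implementation.

```python
import numpy as np

def estimate_hard_iron_offset(samples):
    """Fit a sphere x^2 + y^2 + z^2 + b.(x,y,z) + c = 0 to N x 3
    magnetometer samples by linear least squares; the sphere center
    -b/2 is the hard-iron offset. Assumes the sensor was rotated
    through many orientations at a fixed position."""
    s = np.asarray(samples, dtype=float)
    A = np.hstack([s, np.ones((s.shape[0], 1))])
    rhs = -(s ** 2).sum(axis=1)
    (bx, by, bz, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return -0.5 * np.array([bx, by, bz])

# Synthetic check: readings on a 45 uT sphere shifted by (12, -5, 30) uT.
rng = np.random.default_rng(0)
v = rng.normal(size=(500, 3))
v /= np.linalg.norm(v, axis=1, keepdims=True)
raw = 45.0 * v + np.array([12.0, -5.0, 30.0])
print(estimate_hard_iron_offset(raw))  # ~= [12, -5, 30]
```

Subtracting the recovered offset from each raw reading corrects the hard-iron component; the limitation discussed in the abstract is that this offset is only valid near the position where it was calibrated.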
Procedia PDF Downloads 89
1377 The Role of Vernacular Radio Stations in Enhancing Agricultural Development in Kenya; A Case of KASS FM
Authors: Thomas Kipkurgat, Silahs Chemwaina
Abstract:
Communication and ICT are crucial components in the realization of Vision 2030, and radio has played a key role in disseminating information to a mass audience. Since time immemorial, mass media has played a vital role in passing on information about agricultural development issues, both locally and internationally. This paper assesses the role of community radio stations in enhancing agricultural development in Kenya. It seeks to identify the main contributions of KASS FM radio to agricultural development, especially in rural areas, and to establish the appropriate adjustments in the editorial policies of KASS FM radio that would help promote agricultural development programmes in rural areas. Despite some weaknesses in radio programming and the mode of interaction with rural people, the findings of this study showed that rural communities are better off today than in the old days when FM radios were non-existent. KASS FM has come up with different developmental programmes that have positively contributed to changing rural people's ways of life. These programmes cover farming, health, marital values, the environment, cultural issues, human rights, democracy, religious teachings, and peace and reconciliation. Such programmes feature experts, professionals and opinion leaders who address numerous topics of interest to the community. The local people participate in the production of these programmes through letters to the editor and phone-ins, among others. Programmes such as political talk shows, which feature on KASS FM, have become one of the most important avenues of community participation. The interpretation and conclusions are based on the empirical data analysis and the theories of development advanced by international development communication scholars, as presented in the paper. The study ends with some recommendations on how KASS FM can best serve the interests of the poor in rural areas and help improve their lives.
Keywords: agriculture, development, communication, KASS FM, radio, rural areas, Kenya
Procedia PDF Downloads 298
1376 Speech Emotion Recognition: A DNN and LSTM Comparison in Single and Multiple Feature Application
Authors: Thiago Spilborghs Bueno Meyer, Plinio Thomaz Aquino Junior
Abstract:
Through speech, which privileges the functional and interactive nature of the text, it is possible to ascertain the spatiotemporal circumstances, the conditions of production and reception of the discourse, and explicit purposes such as informing, explaining, or convincing. These conditions allow the interaction between humans to be brought closer to human-robot interaction, making it natural and sensitive to information. However, it is not enough to understand what is said; it is necessary to recognize emotions for the desired interaction. The validity of the use of neural networks for feature selection and emotion recognition was verified. For this purpose, we propose the use of neural networks and a comparison of models, namely recurrent neural networks and deep neural networks, to classify emotions from speech signals and verify the quality of recognition. The goal is to enable the deployment of robots in a domestic environment, such as the HERA robot from the RoboFEI@Home team, which focuses on autonomous service robots for the home. Tests were performed using only the Mel-frequency cepstral coefficients (MFCCs), as well as tests with several features: Delta-MFCC, spectral contrast, and the Mel spectrogram. For the training, validation and testing of the neural networks, the eNTERFACE'05 database was used, which has 42 speakers of 14 different nationalities speaking English. The data in the chosen database are videos that were converted into audio for use with the neural networks. As a result, a classification accuracy of 51.969% was found when using the deep neural network, while the recurrent neural network achieved an accuracy of 44.09%. The results are more accurate when only the Mel-frequency cepstral coefficients are used for classification with the deep neural network, and in only one case is greater accuracy observed for the recurrent neural network, which occurs when various features are used with a batch size of 73 and 100 training epochs.
Keywords: emotion recognition, speech, deep learning, human-robot interaction, neural networks
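As an illustration of the single-feature pipeline described above, the sketch below extracts mean-pooled MFCCs with librosa and trains a small dense network in Keras. The layer sizes and pooling are assumptions; the abstract does not specify the DNN topology, only the batch size of 73 and 100 epochs used in one configuration.

```python
import librosa
import numpy as np
from tensorflow import keras

N_CLASSES = 6  # eNTERFACE'05 covers six basic emotions

def mfcc_features(wav_path, n_mfcc=40):
    """Mean-pooled MFCCs: one fixed-length vector per utterance."""
    y, sr = librosa.load(wav_path, sr=None)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)  # shape (n_mfcc,)

# Small dense network over MFCC vectors, standing in for the paper's
# deep neural network (its exact architecture is not given).
model = keras.Sequential([
    keras.layers.Input(shape=(40,)),
    keras.layers.Dense(256, activation="relu"),
    keras.layers.Dropout(0.3),
    keras.layers.Dense(128, activation="relu"),
    keras.layers.Dense(N_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(X_train, y_train, batch_size=73, epochs=100)  # settings from the abstract
```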
Procedia PDF Downloads 175
1375 Improving Security by Using Secure Servers Communicating via Internet with Standalone Secure Software
Authors: Carlos Gonzalez
Abstract:
This paper describes the use of the Internet as a feature to enhance the security of software that is to be distributed or sold to users potentially all over the world. By placing some of the features of the secure software on a secure server, we increase the security of that software. The communication between the protected software and the secure server is done by a double lock algorithm. This paper also includes an analysis of intruders and describes possible responses to detect threats.
Keywords: internet, secure software, threats, cryptography process
Procedia PDF Downloads 337
1374 Online Dietary Management System
Authors: Kyle Yatich Terik, Collins Oduor
Abstract:
The current healthcare system has made healthcare more accessible and efficient through information technology, implementing computer algorithms that generate menus based on a diagnosis. While many such systems have been created over the years, their main objective is to help healthy individuals calculate their calorie intake and assist them by providing food selections based on a pre-specified calorie target. Such applications have proven useful in some ways, but they are not suitable for monitoring, planning, and managing the dietary needs of hospital patients, especially those in critical condition. The main objective of this work is to design, develop, and implement an efficient, user-friendly, and interactive dietary management system. The specific design and development objectives include a monitoring feature for users based on graphs, system-generated reports for users, dietitians, and system administrators, a feature that allows users to measure their BMI (Body Mass Index), and a food template feature that guides the user toward a balanced diet plan. To develop the system, research was carried out in Nairobi County, Kenya, with online questionnaires as the preferred research design approach. Responses from the 44 participants highlighted the major challenges of the manual dietary process, including the lack of easily accessible calorie information for food products and the expense of physically visiting a dietitian to create a tailored diet plan. In conclusion, the system has the potential to improve quality of life by providing a standard for healthy living and giving individuals readily available knowledge through food templates, allowing users to create their own diet plans consisting of a balanced diet.
Keywords: DMS, dietitian, patient, administrator
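The BMI feature mentioned above reduces to a single formula; a minimal sketch follows, using the standard WHO adult bands (the bands are general knowledge, not taken from the paper).

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body Mass Index: weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2

def bmi_category(value: float) -> str:
    """WHO adult BMI bands."""
    if value < 18.5:
        return "underweight"
    if value < 25.0:
        return "normal"
    if value < 30.0:
        return "overweight"
    return "obese"

print(bmi_category(bmi(70.0, 1.75)))  # 22.9 -> "normal"
```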
Procedia PDF Downloads 166
1373 GAILoc: Improving Fingerprinting-Based Localization System Using Generative Artificial Intelligence
Authors: Getaneh Berie Tarekegn
Abstract:
A precise localization system is crucial for many artificial intelligence Internet of Things (AI-IoT) applications in the era of smart cities, including traffic monitoring, emergency alarms, environmental monitoring, location-based advertising, intelligent transportation, and smart health care. The most common method for providing continuous positioning services in outdoor environments is a global navigation satellite system (GNSS). Due to non-line-of-sight conditions, multipath, and weather, GNSS systems do not perform well in dense urban, urban, and suburban areas. This paper proposes a generative AI-based positioning scheme for large-scale wireless settings using fingerprinting techniques. We present a novel semi-supervised deep convolutional generative adversarial network (S-DCGAN)-based radio map construction method for real-time device localization. We also employ a reliable signal fingerprint feature extraction method with t-distributed stochastic neighbor embedding (t-SNE), which extracts dominant features while eliminating noise from hybrid WLAN and long-term evolution (LTE) fingerprints. The proposed scheme reduced the site-surveying workload required to build the fingerprint database by up to 78.5% and significantly improved positioning accuracy. The results show that the average positioning error of GAILoc is less than 39 cm, and more than 90% of the errors are less than 82 cm. That is, the numerical results proved that, in comparison to traditional methods, the proposed method can significantly improve positioning performance and reduce radio map construction costs.
Keywords: location-aware services, feature extraction technique, generative adversarial network, long short-term memory, support vector machine
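A sketch of the t-SNE feature extraction step named above: given a matrix of received-signal-strength fingerprints, t-SNE embeds them into a low-dimensional space that preserves local neighborhood structure. The fingerprint matrix here is synthetic and its dimensions are assumptions.

```python
import numpy as np
from sklearn.manifold import TSNE

# Hypothetical fingerprint matrix: rows are reference points, columns are
# RSS values (dBm) from hybrid WLAN/LTE transmitters, as in the abstract.
rng = np.random.default_rng(42)
fingerprints = rng.uniform(-100, -30, size=(400, 24))

# t-SNE projects the noisy high-dimensional fingerprints into a
# low-dimensional embedding that preserves local neighborhoods.
embedding = TSNE(n_components=2, perplexity=30, init="pca",
                 random_state=0).fit_transform(fingerprints)
print(embedding.shape)  # (400, 2)
```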
Procedia PDF Downloads 80
1372 The Plasma Additional Heating Systems by Electron Cyclotron Waves
Authors: Ghoutia Naima Sabri, Tayeb Benouaz
Abstract:
The interaction between the wave and the electron cyclotron motion, when an electron passes through a resonance layer at a fixed frequency, results in electron cyclotron (EC) absorption in a tokamak plasma, and it depends on the magnetic field. This mechanism is the principle of additional heating (ECRH) and of non-inductive current drive (ECCD) in modern fusion devices. In this paper we are interested in the problem of EC absorption, which uses a microscopic description from kinetic theory, whereas propagation uses the cold-plasma description. The absorbed power depends on the optical depth, which in turn depends on the absorption coefficient and on the order of the excited harmonic for the O-mode or X-mode. There is another possibility of heating by dissipation of Alfven waves, based on the resonance of cold plasma waves: the shear Alfven wave (SW) and the compressional Alfven wave (FW). Once the FW power is coupled to the SW, it stays on the magnetic surface and dissipates there, which causes heating of the bulk plasma.
Keywords: electron cyclotron, heating, plasma, tokamak
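For orientation, the resonance condition underlying EC absorption, in its standard textbook form (not reproduced from the paper): a wave of frequency ω is absorbed near the layer where the local field B satisfies the n-th harmonic condition.

```latex
% n-th harmonic electron cyclotron resonance (textbook form):
\[
  \omega \,=\, \frac{n\,\omega_{ce}}{\gamma} \,+\, k_{\parallel} v_{\parallel},
  \qquad
  \omega_{ce} \,=\, \frac{eB}{m_{e}},
\]
% \gamma : relativistic factor; k_\parallel v_\parallel : Doppler shift
% along the field line. Since B varies across the tokamak, the resonance
% picks out a spatial layer, which is why the absorption is localized.
```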
Procedia PDF Downloads 518
1371 Radar Track-based Classification of Birds and UAVs
Authors: Altilio Rosa, Chirico Francesco, Foglia Goffredo
Abstract:
In recent years, the number of Unmanned Aerial Vehicles (UAVs) has increased significantly. The rapid development of commercial and recreational drones makes them an important part of our society. Despite the growing list of their applications, these vehicles pose a huge threat to civil and military installations: detection, classification and neutralization of such flying objects have become an urgent need. Radar is an effective remote sensing tool for detecting and tracking flying objects, but scenarios characterized by a high number of tracks from flying birds make the drone detection task especially challenging: the operator's plan position indicator (PPI) is cluttered with a huge number of potential threats, and the operator's reaction time can be severely affected. Flying birds show velocity, radar cross-section and, in general, characteristics similar to UAVs. Starting from the absence of any single feature able to distinguish UAVs from birds, this paper uses a multiple-feature approach in which an original feature selection technique is developed to feed binary classifiers trained to distinguish birds and UAVs. Radar tracks acquired in the field from different UAVs and birds performing various trajectories were used to extract specifically designed target movement-related features based on velocity, trajectory and signal strength. An optimization strategy based on a genetic algorithm is also introduced to select the optimal subset of features and to estimate the performance of several classification algorithms (neural network, SVM, logistic regression…), both in terms of the number of selected features and the misclassification error. Results show that the proposed methods are able to reduce the dimension of the data space and to remove almost all non-drone false targets with suitable classification accuracy (higher than 95%).
Keywords: birds, classification, machine learning, UAVs
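The genetic-algorithm feature selection can be sketched compactly: individuals are feature bit-masks, and fitness is cross-validated accuracy minus a small penalty per kept feature. Population size, mutation rate, and the logistic-regression fitness model are assumptions; the paper's actual GA settings and classifiers are not given in the abstract.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=600, n_features=20, n_informative=6,
                           random_state=0)  # stand-in for track features

def fitness(mask):
    """CV accuracy restricted to the selected features, lightly
    penalized by the number of features kept."""
    if not mask.any():
        return 0.0
    acc = cross_val_score(LogisticRegression(max_iter=1000),
                          X[:, mask], y, cv=3).mean()
    return acc - 0.002 * mask.sum()

# Deliberately small GA: bit-mask individuals, tournament selection,
# uniform crossover, bit-flip mutation.
pop = rng.random((30, X.shape[1])) < 0.5
for generation in range(25):
    scores = np.array([fitness(ind) for ind in pop])
    children = []
    for _ in range(len(pop)):
        i, j = rng.integers(len(pop), size=2)
        a = pop[i] if scores[i] >= scores[j] else pop[j]   # tournament
        k, l = rng.integers(len(pop), size=2)
        b = pop[k] if scores[k] >= scores[l] else pop[l]
        cross = rng.random(X.shape[1]) < 0.5               # uniform crossover
        child = np.where(cross, a, b)
        child ^= rng.random(X.shape[1]) < 0.02             # mutation
        children.append(child)
    pop = np.array(children)

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("selected features:", np.flatnonzero(best))
```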
Procedia PDF Downloads 228
1370 Experimental Study on Hardness and Impact Strength of Polyethylene/Carbon Composites
Authors: Armin Najipour, A. M. Fattahi
Abstract:
The aim of this research was to investigate the effect of adding multi-walled carbon nanotubes on the mechanical properties of polyethylene/carbon nanotube nanocomposites. To do so, polyethylene and carbon nanotube were melt-mixed in different weight percentages, containing 0, 0.5, 1, and 1.5% carbon nanotube, in a twin-screw extruder. The nanocomposite samples were then molded in an injection apparatus according to the ASTM D6110 standard. The effects of carbon nanotube addition at 4 different levels and of injection pressure at 2 levels on the hardness and impact strength of the nanocomposite samples were investigated. The results showed that the addition of carbon nanotube had a significant effect on improving the hardness and impact strength of the nanocomposite samples: by adding 1% w/w carbon nanotube, the impact strength and hardness of the samples improved by 74% and 46.7%, respectively. Also, according to the results, the effect of injection pressure was much smaller than that of the carbon nanotube weight percentage.
Keywords: carbon nanotube, injection molding, mechanical properties, nanocomposite, polyethylene
Procedia PDF Downloads 324
1369 Advancements in Predicting Diabetes Biomarkers: A Machine Learning Epigenetic Approach
Authors: James Ladzekpo
Abstract:
Background: The urgent need to identify new pharmacological targets for diabetes treatment and prevention has been amplified by the disease's extensive impact on individuals and healthcare systems. A deeper insight into the biological underpinnings of diabetes is crucial for the creation of therapeutic strategies aimed at these biological processes. Current predictive models based on genetic variations fall short of accurately forecasting diabetes. Objectives: Our study aims to pinpoint key epigenetic factors that predispose individuals to diabetes. These factors will inform the development of an advanced predictive model that estimates diabetes risk from genetic profiles, utilizing state-of-the-art statistical and data mining methods. Methodology: We implemented recursive feature elimination with cross-validation using the support vector machine (SVM) approach for refined feature selection. Building on this, we developed six machine learning models, including logistic regression, k-Nearest Neighbors (k-NN), Naive Bayes, Random Forest, Gradient Boosting, and a Multilayer Perceptron Neural Network, to evaluate their performance. Findings: The Gradient Boosting Classifier excelled, achieving a median recall of 92.17%, a median area under the receiver operating characteristic curve (AUC) of 68%, and median accuracy and precision scores of 76%. Through our machine learning analysis, we identified 31 genes significantly associated with diabetes traits, highlighting their potential as biomarkers and targets for diabetes management strategies. Conclusion: Particularly noteworthy were the Gradient Boosting Classifier and the Multilayer Perceptron Neural Network, which demonstrated potential in diabetes outcome prediction. We recommend that future investigations incorporate larger cohorts and a wider array of predictive variables to enhance the models' predictive capabilities.
Keywords: diabetes, machine learning, prediction, biomarkers
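The feature-selection step named above maps directly onto scikit-learn's RFECV; a minimal sketch with a synthetic stand-in for the epigenetic feature matrix (the dataset, step size, and scoring choice are assumptions).

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFECV
from sklearn.model_selection import StratifiedKFold
from sklearn.svm import SVC

# Stand-in for the epigenetic feature matrix (e.g., methylation levels).
X, y = make_classification(n_samples=300, n_features=100, n_informative=10,
                           random_state=1)

# Recursive feature elimination with cross-validation, as in the abstract;
# a linear-kernel SVM exposes coef_ for feature ranking.
selector = RFECV(estimator=SVC(kernel="linear"),
                 step=5,
                 cv=StratifiedKFold(5),
                 scoring="recall")
selector.fit(X, y)
print("optimal number of features:", selector.n_features_)
```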
Procedia PDF Downloads 60
1368 Producing Graphical User Interface from Activity Diagrams
Authors: Ebitisam K. Elberkawi, Mohamed M. Elammari
Abstract:
A Graphical User Interface (GUI) is as essential to programming as any other characteristic or feature, because GUI components provide the fundamental interaction between the user and the program. We must therefore give more attention to the GUI during the building and development of systems, and greater attention to the user, who is the cornerstone of any dealing with the GUI. This paper introduces an approach for designing GUIs from one of the models of business workflows that describes the workflow behavior of a system, specifically activity diagrams (AD).
Keywords: activity diagram, graphical user interface, GUI components, program
Procedia PDF Downloads 468
1367 Robust and Dedicated Hybrid Cloud Approach for Secure Authorized Deduplication
Authors: Aishwarya Shekhar, Himanshu Sharma
Abstract:
Data deduplication is one of the most important data compression techniques for eliminating duplicate copies of repeating data, and it has been widely used in cloud storage to reduce the amount of storage space and save bandwidth. In this process, duplicate data is expunged, leaving only a single instance of the data to be stored, though an index of each piece of data is still maintained. Data deduplication is an approach for minimizing the amount of storage space an organization requires to retain its data. In many organizations, the storage systems hold identical copies of numerous pieces of data. Deduplication eliminates these extra copies by saving just one copy of the data and replacing the other copies with pointers back to the primary copy. To avoid this duplication of data and to preserve confidentiality in the cloud, we apply the concept of a hybrid cloud: a fusion of at least one public and one private cloud. As a proof of concept, we implement Java code which provides security as well as removing all types of duplicated data from the cloud.
Keywords: confidentiality, deduplication, data compression, hybridity of cloud
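The pointer-based, single-instance storage described above fits in a few lines. The paper's proof of concept is in Java; the sketch below uses Python for brevity, with SHA-256 digests as the content identifiers (an assumed design detail).

```python
import hashlib

class DedupStore:
    """Minimal single-instance store: identical content is kept once,
    and later writes of the same bytes just add a pointer (the digest)."""

    def __init__(self):
        self.blocks = {}   # digest -> content (the single stored copy)
        self.index = {}    # name -> digest (the "pointer")

    def put(self, name: str, content: bytes) -> str:
        digest = hashlib.sha256(content).hexdigest()
        if digest not in self.blocks:       # store only the first instance
            self.blocks[digest] = content
        self.index[name] = digest           # every name keeps a pointer
        return digest

    def get(self, name: str) -> bytes:
        return self.blocks[self.index[name]]

store = DedupStore()
store.put("a.txt", b"same payload")
store.put("b.txt", b"same payload")         # deduplicated: no new block
print(len(store.blocks))                    # 1
```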
Procedia PDF Downloads 386
1366 SNR Classification Using Multiple CNNs
Authors: Thinh Ngo, Paul Rad, Brian Kelley
Abstract:
Noise estimation is essential in today's wireless systems for power control, adaptive modulation, interference suppression and quality of service. Deep learning (DL) has already been applied in the physical layer for modulation and signal classification. An unacceptably low accuracy of less than 50% is found to undermine the traditional application of DL classification for SNR prediction. In this paper, we use a divide-and-conquer algorithm and a classifier fusion method to simplify SNR classification and thereby enhance DL learning and prediction. Specifically, multiple CNNs are used for classification rather than a single CNN. Each CNN performs a binary classification for a single SNR threshold with two labels: less than, or greater than or equal to. Together, the multiple CNNs are combined to effectively classify over a range of SNR values from −20 ≤ SNR ≤ 32 dB. We use pre-trained CNNs to predict SNR over a wide range of joint channel parameters, including multiple Doppler shifts (0, 60, 120 Hz), power-delay profiles, and signal modulation types (QPSK, 16-QAM, 64-QAM). The approach achieves an individual SNR prediction accuracy of 92%, a composite accuracy of 70%, and prediction convergence one order of magnitude faster than that of traditional estimation.
Keywords: classification, CNN, deep learning, prediction, SNR
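A sketch of the fusion rule: one binary decision per SNR threshold, with the count of positive votes indexing the SNR bin. The threshold grid and the noisy stand-in for each trained CNN are assumptions; the abstract does not give the CNN architecture.

```python
import numpy as np

THRESHOLDS = np.arange(-20, 33, 4)  # hypothetical 4 dB grid over [-20, 32]

class ThresholdCNN:
    """Stand-in for one trained binary CNN answering 'is SNR >= t?'.
    A noisy oracle replaces the real network here."""
    def __init__(self, threshold):
        self.threshold = threshold
    def predict(self, true_snr):
        return int(true_snr + np.random.normal(0, 0.5) >= self.threshold)

def fuse(models, signal):
    """Classifier fusion: count per-threshold positive votes; the count
    indexes the SNR bin (divide-and-conquer over one hard task)."""
    votes = sum(m.predict(signal) for m in models)
    return THRESHOLDS[votes - 1] if votes else THRESHOLDS[0] - 4

models = [ThresholdCNN(t) for t in THRESHOLDS]
print(fuse(models, signal=10.3))  # ~8 dB: highest threshold not exceeded
```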
Procedia PDF Downloads 137
1365 CPW-Fed Broadband Circularly Polarized Planar Antenna with Improved Ground
Authors: Gnanadeep Gudapati, V. Annie Grace
Abstract:
A broadband circular polarization (CP) feature is designed for a CPW-fed planar printed monopole antenna. The antenna consists of a rectangular patch and an improved ground plane. The antenna's impedance bandwidth can be increased by adding a vertical stub and a horizontal slit in the ground plane. The measured results show that the proposed antenna has a wide 10-dB return loss bandwidth of 70.2% (4.35 GHz, 3.7-8.1 GHz) centered at 4.2 GHz.
Keywords: CPW-fed, circularly polarized, FR4 epoxy, slit and stub
Procedia PDF Downloads 153
1364 Cell Surface Display of Xylanase on Escherichia coli by TibA Autotransporter
Authors: Yeng Min Yi, Rosli Md Illias, Salehhuddin Hamdan
Abstract:
Industrial biocatalysis is mainly based on the use of cell-free or intracellular enzyme systems. However, the high cost and relatively low operational stability of free enzymes limit their practical use in industry. A cell surface display system can be used as a cost-efficient alternative that avoids laborious purification and substrate transport limitations. In this research, the TibA autotransporter from E. coli was used to display Aspergillus fumigatus xylanase (xyn). The amplified xyn was fused between the N-terminal signal peptide and the C-terminal β-barrel of TibA. The clone was transformed and expressed in E. coli BL21 (DE3). Outer membrane localization of the TibA-xyn fusion protein was confirmed by SDS-PAGE and western blot, with an expected size of 62.5 kDa. Functional display of xyn was examined by an activity assay; the cell-surface-displayed xyn exhibited the highest activity at 37 °C with 0.3 mM IPTG. In summary, the TibA display system has potential for further industrial applications. Moreover, this is the first report of the display of a xylanase using TibA on the surface of E. coli.
Keywords: biocatalysis, cell surface display, Escherichia coli, TibA autotransporter
Procedia PDF Downloads 286
1363 Experimental Study on Tensile Strength of Polyethylene/Carbon Injected Composites
Authors: Armin Najipour, A. M. Fattahi
Abstract:
The aim of this research was to investigate the effect of adding multi-walled carbon nanotubes on the mechanical properties of polyethylene/carbon nanotube nanocomposites. To do so, polyethylene and carbon nanotube were melt-mixed in different weight percentages, containing 0, 0.5, 1, and 1.5% carbon nanotube, in a twin-screw extruder. The nanocomposite samples were then molded in an injection apparatus according to the ASTM D638 standard. The effects of carbon nanotube addition at 4 different levels on the tensile strength, elastic modulus and elongation of the nanocomposite samples were investigated. The results showed that the addition of carbon nanotube had a significant effect on improving the tensile strength of the nanocomposite samples: by adding 1% w/w carbon nanotube, the tensile strength of the samples improved by 23.4%, the elastic modulus by 60.4%, and the elongation by 29.7%. Also, according to the results, the Manera approximation model at about 0.5 wt% and the modified Halpin-Tsai model at about 1 wt% lead to favorable and reliable results.
Keywords: carbon nanotube, injection molding, mechanical properties, nanocomposite, polyethylene
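For context, the Halpin-Tsai relation referred to above, in its standard micromechanical form (a textbook statement, not reproduced from the paper):

```latex
% Halpin-Tsai estimate of composite modulus E_c from matrix modulus E_m,
% filler modulus E_f, filler volume fraction V_f, and shape factor \zeta
% (high-aspect-ratio fillers such as nanotubes take large \zeta):
\[
  \frac{E_c}{E_m} \,=\, \frac{1 + \zeta\,\eta\,V_f}{1 - \eta\,V_f},
  \qquad
  \eta \,=\, \frac{E_f/E_m - 1}{E_f/E_m + \zeta}.
\]
```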
Procedia PDF Downloads 273
1362 An Approach to Solving Some Inverse Problems for Parabolic Equations
Authors: Bolatbek Rysbaiuly, Aliya S. Azhibekova
Abstract:
Problems concerning the interpretation of well-testing results belong to the class of inverse problems of subsurface hydromechanics. The distinctive feature of such problems is that the available additional information depends on the capabilities of oilfield experiments. Another factor that should not be overlooked is the existence of errors in the test data. To determine reservoir properties, some inverse problems for parabolic equations were investigated, and an approach to solving the inverse problems based on the method of regularization is proposed.
Keywords: iterative approach, inverse problem, parabolic equation, reservoir properties
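A generic statement of the regularization method invoked above, in standard Tikhonov form (illustrative; the authors' exact functional is not given in the abstract):

```latex
% Tikhonov-regularized formulation: recover parameters q of a forward
% (parabolic) model A(q) from noisy well-test data u^\delta by
\[
  \min_{q}\; \bigl\|A(q) - u^{\delta}\bigr\|^{2}
            \,+\, \alpha\,\bigl\|q - q_{0}\bigr\|^{2},
\]
% where \alpha > 0 trades data fit against stability (countering the
% measurement errors noted above) and q_0 encodes prior knowledge of
% the reservoir properties.
```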
Procedia PDF Downloads 431
1361 Feature Engineering Based Detection of Buffer Overflow Vulnerability in Source Code Using Deep Neural Networks
Authors: Mst Shapna Akter, Hossain Shahriar
Abstract:
One of the most important challenges in the field of software code audit is the presence of vulnerabilities in software source code. Every year, more and more software flaws are found, either internally in proprietary code or revealed publicly. These flaws are highly likely to be exploited and can lead to system compromise, data leakage, or denial of service. Open-source C and C++ code is now available for creating a large-scale, machine-learning system for function-level vulnerability identification. We assembled a sizable dataset of millions of open-source functions that point to potential exploits, and developed an efficient and scalable vulnerability detection method based on deep neural network models that learn features extracted from the source code. The source code is first converted into a minimal intermediate representation to remove pointless components and shorten dependencies. Moreover, we keep the semantic and syntactic information using state-of-the-art word embedding algorithms such as GloVe and fastText. The embedded vectors are subsequently fed into deep learning networks such as LSTM, BiLSTM, LSTM-Autoencoder, word2vec, BERT, and GPT-2 to classify the possible vulnerabilities. Furthermore, we propose a neural network model which can overcome issues associated with traditional neural networks. Evaluation metrics such as F1 score, precision, recall, accuracy, and total execution time have been used to measure performance. We made a comparative analysis between results derived from features containing a minimal text representation and those containing semantic and syntactic information. We found that all of the deep learning models provide comparatively higher accuracy when semantic and syntactic information is used as the features, but require longer execution time, as the word embedding algorithm adds complexity to the overall system.
Keywords: cyber security, vulnerability detection, neural networks, feature extraction
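A minimal sketch of the embedding-plus-recurrent-classifier stage described above, in Keras: tokenized functions pass through an embedding layer and a BiLSTM into a binary vulnerable/not-vulnerable head. Vocabulary size, sequence length, and layer widths are assumptions, not the paper's settings.

```python
from tensorflow import keras

VOCAB_SIZE = 20000   # hypothetical: tokens in the minimal intermediate form
MAX_TOKENS = 500     # hypothetical: tokens kept per function

# Embedding + BiLSTM binary classifier over tokenized source functions,
# in the spirit of the pipeline described (hyperparameters assumed).
model = keras.Sequential([
    keras.layers.Input(shape=(MAX_TOKENS,), dtype="int32"),
    keras.layers.Embedding(VOCAB_SIZE, 128, mask_zero=True),
    keras.layers.Bidirectional(keras.layers.LSTM(64)),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),  # vulnerable / not
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy", keras.metrics.Precision(),
                       keras.metrics.Recall()])
model.summary()
```

In practice the integer token ids would come from the GloVe/fastText vocabulary built over the intermediate representation; pre-trained embedding weights can be loaded into the Embedding layer instead of learning them from scratch.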
Procedia PDF Downloads 93
1360 Automatic Detection of Diabetic Retinopathy
Authors: Zaoui Ismahene, Bahri Sidi Mohamed, Abbassa Nadira
Abstract:
Diabetic Retinopathy (DR) is a leading cause of vision impairment and blindness among individuals with diabetes. Early diagnosis is crucial for effective treatment, yet current diagnostic methods rely heavily on manual analysis of retinal images, which can be time-consuming and prone to subjectivity. This research proposes an automated system for the detection of DR using Jacobi wavelet-based feature extraction combined with Support Vector Machines (SVM) for classification. The integration of wavelet analysis with machine learning techniques aims to improve the accuracy, efficiency, and reliability of DR diagnosis. In this study, retinal images are preprocessed through normalization, resizing, and noise reduction to enhance image quality. The Jacobi wavelet transform is then applied to extract both global and local features, effectively capturing subtle variations in retinal images that are indicative of DR. These extracted features are fed into an SVM classifier, known for its robustness in handling high-dimensional data and its ability to achieve high classification accuracy. The SVM classifier is optimized using parameter tuning to improve performance. The proposed methodology is evaluated using a comprehensive dataset of retinal images encompassing a range of DR severity levels. The results show that the proposed system outperforms traditional wavelet-based methods, demonstrating significantly higher accuracy, sensitivity, and specificity in detecting DR. By leveraging the discriminative power of Jacobi wavelet features and the robustness of SVM, the system provides a promising solution for the automatic detection of DR, which could assist ophthalmologists in early diagnosis and intervention, ultimately improving patient outcomes. This research highlights the potential of combining wavelet-based image processing with machine learning for advancing automated medical diagnostics.
Keywords: diabetic retinopathy (DR), Jacobi wavelets, machine learning, feature extraction, classification
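A sketch of the wavelet-features-plus-SVM pipeline: since Jacobi wavelets are not available in common libraries, a standard Daubechies wavelet from PyWavelets stands in here; the feature definition (sub-band energies), SVM settings, and synthetic data are assumptions.

```python
import numpy as np
import pywt
from sklearn.svm import SVC

def wavelet_features(image: np.ndarray, wavelet: str = "db4") -> np.ndarray:
    """Energy features from a 2-level 2-D wavelet decomposition."""
    coeffs = pywt.wavedec2(image, wavelet=wavelet, level=2)
    feats = [np.mean(np.abs(coeffs[0]))]          # approximation energy
    for detail_level in coeffs[1:]:               # (cH, cV, cD) per level
        feats.extend(np.mean(np.abs(d)) for d in detail_level)
    return np.array(feats)

# Hypothetical training data: `images` would be preprocessed retinal
# images, `labels` the DR / no-DR annotations.
rng = np.random.default_rng(0)
images = rng.random((50, 128, 128))
labels = rng.integers(0, 2, size=50)
X = np.stack([wavelet_features(im) for im in images])
clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X, labels)
print(clf.score(X, labels))
```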
Procedia PDF Downloads 12
1359 A Prospective Study on the Efficacy of Mesenchymal Stem Cells in Intervertebral Disc Regeneration
Authors: Prabhu Thangaraju, Manoj Deepak, A. Sivakumar
Abstract:
Removal of an intervertebral disc along with spinal fusion has many disadvantages, such as causing stress fractures. If the spine could instead be regenerated, it would be possible to avoid the complications of the surgery and achieve better results. Our study involves the use of mesenchymal stem cells to regenerate the discs. It included 10 patients who presented with degenerative disc disease between 2008 and 2011 in our hospital. After adequate pre-operative checks, prepared mesenchymal stem cells were injected into the disc spaces. These patients had been subjected to conservative therapy for a minimum of six weeks before they were accepted into the study. They were followed up regularly for a minimum of 2 years with serial radiographs and MRI. 8 out of the 10 patients had complete reduction of the pain. The T2-weighted MRI images in 9 out of the 10 patients showed a bright signal compared with the previous images, indicating an improvement in hydration levels. From this case study of 10 patients subjected to mesenchymal cell therapy in our hospital, we conclude that the use of mesenchymal cells in the treatment of intervertebral disc degeneration is a safe and effective option.
Keywords: mesenchymal stem cells, intervertebral disc, the spine, disc degeneration
Procedia PDF Downloads 375
1358 Exploring the Applications of Neural Networks in the Adaptive Learning Environment
Authors: Baladitya Swaika, Rahul Khatry
Abstract:
Computer Adaptive Tests (CATs) are one of the most efficient ways of testing the cognitive abilities of students. CATs are based on Item Response Theory (IRT), which performs item selection and ability estimation using the statistical methods of maximum information selection/selection from the posterior, and maximum-likelihood (ML)/maximum a posteriori (MAP) estimators, respectively. This study aims at combining both classical and Bayesian approaches to IRT to create a dataset which is then fed to a neural network that automates the process of ability estimation, and at comparing it to traditional CAT models designed using IRT. This study uses Python as the base coding language, pymc for statistical modelling of the IRT, and scikit-learn for the neural network implementations. On creation of the model and on comparison, it is found that the neural network based model performs 7-10% worse than the IRT model for score estimation. Although it performs poorly compared to the IRT model, the neural network model can be beneficially used in back-ends for reducing time complexity, as the IRT model would have to re-calculate the ability every time it gets a request, whereas the prediction from a neural network can be done in a single step by an existing trained regressor. This study also proposes a new kind of framework whereby the neural network model could be used to incorporate feature sets other than the normal IRT feature set, using a neural network's capacity for learning unknown functions to give rise to better CAT models. Categorical features like test type, etc., could be learnt and incorporated in IRT functions with the help of techniques like logistic regression, and used to learn functions expressed as models which may not be trivial to express via equations. Such a framework, when implemented, would be highly advantageous in psychometrics and cognitive assessments. This study gives a brief overview of how neural networks can be used in adaptive testing, not only by reducing time complexity but also by being able to incorporate newer and better datasets, which would eventually lead to higher-quality testing.
Keywords: computer adaptive tests, item response theory, machine learning, neural networks
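For concreteness, the two-parameter logistic (2PL) item response function and a grid-based maximum-likelihood ability estimate, the classical pieces the study automates (the item parameters below are made up for illustration):

```python
import numpy as np

def p_correct(theta, a, b):
    """2PL IRT model: probability that an examinee of ability theta
    answers an item with discrimination a and difficulty b correctly."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def ml_ability(responses, a, b, grid=np.linspace(-4, 4, 801)):
    """Maximum-likelihood ability estimate over a grid of theta values;
    this is the per-request recalculation a trained regressor replaces."""
    p = p_correct(grid[:, None], a[None, :], b[None, :])   # (grid, items)
    loglik = (responses * np.log(p) +
              (1 - responses) * np.log(1 - p)).sum(axis=1)
    return grid[np.argmax(loglik)]

a = np.array([1.2, 0.8, 1.5, 1.0])    # hypothetical item parameters
b = np.array([-0.5, 0.0, 0.7, 1.2])
responses = np.array([1, 1, 0, 0])
print(ml_ability(responses, a, b))    # ~0.2, between solved/missed items
```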
Procedia PDF Downloads 180
1357 Characteristics and Feature Analysis of PCF Labeling among Construction Materials
Authors: Sung-mo Seo, Chang-u Chae
Abstract:
Product Carbon Footprint (PCF) Labeling has been run for more than four years by the Ministry of Environment, and a number of products have been labeled by KEITI, declaring their carbon emissions over the life cycle stages. There are several categories for certifying products by their characteristics of usage; building products are applied to a building as combined components. In this paper, the current status of PCF labeling is compared with the LCI DB with respect to data composition. From this comparative analysis, we suggest directions for carbon labeling development.
Keywords: carbon labeling, LCI DB, building materials, life cycle assessment
Procedia PDF Downloads 423
1356 Implementation of a Serializer to Represent PHP Objects in the Extensible Markup Language
Authors: Lidia N. Hernández-Piña, Carlos R. Jaimez-González
Abstract:
Interoperability in distributed systems is an important feature that refers to the communication between two applications written in different programming languages. This paper presents a serializer and a de-serializer of PHP objects to and from XML, implemented as an independent library written in the PHP programming language. The XML generated by this serializer is independent of the programming language and can be used by other existing Web Objects in XML (WOX) serializers and de-serializers, which allows interoperability with other object-oriented programming languages.
Keywords: interoperability, PHP object serialization, PHP to XML, web objects in XML, WOX
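The reflection-based object-to-XML mapping at the heart of such a serializer can be illustrated briefly. WOX's actual element and attribute names may differ; the sketch below uses Python rather than PHP purely to illustrate the idea of walking an object graph and emitting language-neutral XML.

```python
import xml.etree.ElementTree as ET

def to_xml(obj, tag="object"):
    """Serialize a plain object graph to XML: each attribute becomes a
    child element carrying its name and type, so any language can read it."""
    elem = ET.Element(tag, attrib={"class": type(obj).__name__})
    for name, value in vars(obj).items():
        child = ET.SubElement(elem, "field", attrib={"name": name})
        if hasattr(value, "__dict__"):       # nested object: recurse
            child.append(to_xml(value))
        else:                                 # primitive: store as text
            child.set("type", type(value).__name__)
            child.text = str(value)
    return elem

class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

print(ET.tostring(to_xml(Point(3, 4)), encoding="unicode"))
```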
Procedia PDF Downloads 240
1355 Investigating Software Engineering Challenges in Game Development
Authors: Fawad Zaidi
Abstract:
This paper discusses a variety of challenges and solutions involved in creating computer games and the issues faced by the software engineers working in this field. The review further investigates the articles' coverage of project scope and the problem of feature creep, which appears to be inherent in game development. The paper tries to answer the following question: is this a problem caused by a shortage of, or bad, software engineering practices, or is it outside the control of the software engineering component of the game production process?
Keywords: software engineering, computer games, software applications, development
Procedia PDF Downloads 480
1354 Towards Dynamic Estimation of Residential Building Energy Consumption in Germany: Leveraging Machine Learning and Public Data from England and Wales
Authors: Philipp Sommer, Amgad Agoub
Abstract:
The construction sector significantly impacts global CO₂ emissions, particularly through the energy usage of residential buildings. To address this, various governments, including Germany's, are focusing on reducing emissions via sustainable refurbishment initiatives. This study examines the application of machine learning (ML) to estimate energy demands dynamically in residential buildings and enhance the potential for large-scale sustainable refurbishment. A major challenge in Germany is the lack of extensive publicly labeled datasets for energy performance, as energy performance certificates, which provide critical data on building-specific energy requirements and consumption, are not available for all buildings or require on-site inspections. Conversely, England and other countries in the European Union (EU) have rich public datasets, providing a viable alternative for analysis. This research adapts insights from these English datasets to the German context by developing a comprehensive data schema and calibration dataset capable of predicting building energy demand effectively. The study proposes a minimal feature set, determined through feature importance analysis, to optimize the ML model. Findings indicate that ML significantly improves the scalability and accuracy of energy demand forecasts, supporting more effective emissions reduction strategies in the construction industry. Integrating energy performance certificates into municipal heat planning in Germany highlights the transformative impact of data-driven approaches on environmental sustainability. The goal is to identify and utilize key features from open data sources that significantly influence energy demand, creating an efficient forecasting model. Using Extreme Gradient Boosting (XGB) and data from energy performance certificates, effective features such as building type, year of construction, living space, insulation level, and building materials were incorporated. These were supplemented by data derived from descriptions of roofs, walls, windows, and floors, integrated into three datasets. The emphasis was on features accessible via remote sensing, which, along with other correlated characteristics, greatly improved the model's accuracy. The model was further validated using SHapley Additive exPlanations (SHAP) values and aggregated feature importance, which quantified the effects of individual features on the predictions. The refined model using remote sensing data showed a coefficient of determination (R²) of 0.64 and a mean absolute error (MAE) of 4.12, indicating predictions based on efficiency class 1-100 (G-A) may deviate by 4.12 points. This R² increased to 0.84 with the inclusion of more samples, with wall type emerging as the most predictive feature. After optimizing and incorporating related features like estimated primary energy consumption, the R² score for the training and test set reached 0.94, demonstrating good generalization. The study concludes that ML models significantly improve prediction accuracy over traditional methods, illustrating the potential of ML in enhancing energy efficiency analysis and planning. This supports better decision-making for energy optimization and highlights the benefits of developing and refining data schemas using open data to bolster sustainability in the building sector. 
The study underscores the importance of supporting open data initiatives to collect similar features and to enable the creation of comparable models in Germany, enhancing the outlook for environmental sustainability.
Keywords: machine learning, remote sensing, residential building, energy performance certificates, data-driven, heat planning
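A condensed sketch of the modeling-and-explanation loop described above, with XGBoost and SHAP on a synthetic stand-in for the certificate-derived features (the feature names echo the abstract, but the data and target are fabricated for illustration):

```python
import numpy as np
import pandas as pd
import shap
import xgboost as xgb
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
X = pd.DataFrame({
    "construction_year": rng.integers(1900, 2020, n),
    "living_space_m2": rng.uniform(40, 250, n),
    "insulation_level": rng.integers(0, 4, n),
    "wall_type": rng.integers(0, 5, n),
})
# Synthetic target: efficiency score on the 1-100 (G-A) scale.
y = (0.3 * (X["construction_year"] - 1900) / 1.2
     + 10 * X["insulation_level"] + rng.normal(0, 5, n)).clip(1, 100)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = xgb.XGBRegressor(n_estimators=300, max_depth=5,
                         learning_rate=0.05).fit(X_tr, y_tr)
print("R^2:", model.score(X_te, y_te))

# SHAP values quantify each feature's contribution per prediction,
# as the study uses to validate feature importance.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_te)
print(np.abs(shap_values).mean(axis=0))  # aggregated importance
```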
Procedia PDF Downloads 62
1353 A Study on Real-Time Fluorescence-Photoacoustic Imaging System for Mouse Thrombosis Monitoring
Authors: Sang Hun Park, Moung Young Lee, Su Min Yu, Hyun Sang Jo, Ji Hyeon Kim, Chul Gyu Song
Abstract:
A near-infrared light source, used as the light source in the fluorescence imaging system, is suitable for real-time use during an operation since it does not interfere with the surgical view. However, fluorescence images carry no depth information. In this paper, we configured a device based on research into molecular imaging systems for monitoring thrombi using fluorescence and photoacoustics. Fluorescence imaging was performed in a phantom experiment in order to find the exact location, and photoacoustic imaging in order to detect the depth. When the fluorescence images obtained in the current phantom experiments were evaluated, it was confirmed that the image looked sharper at a contrast agent concentration of 25 μg/ml. The phantom experiment demonstrated the feasibility of combining fluorescence and photoacoustic imaging using an indocyanine green contrast agent. For early diagnosis of cardiovascular diseases, more active research into the fusion of different molecular imaging devices is required.
Keywords: fluorescence, photoacoustic, indocyanine green, carotid artery
Procedia PDF Downloads 604
1352 Wobbled Laser Beam Welding for Macro-to Micro-Fabrication Process
Authors: Farzad Vakili-Farahani, Joern Lungershausen, Kilian Wasmer
Abstract:
Wobbled laser beam welding, i.e., fast oscillation of a tiny laser beam along a designed path (the weld geometry) during the laser pulse illumination, opens new possibilities to improve the macro-to-micro manufacturing process. The present work introduces wobbled laser beam welding as a robust welding strategy for improving the macro-to-micro fabrication process, e.g., laser processing for the gap-bridging and packaging industry. The typical requisites and relevant equipment for the development of a wobbled laser processing unit are addressed, including a suitable laser source, light delivery system, optics, a proper beam deflection system, and the design geometry. In addition, experiments have been carried out on a titanium plate to compare the results of wobbled laser welding with conventional pulsed laser welding. Compared to pulsed laser welding, wobbled laser welding offers a much greater fusion area (i.e., additional molten material) while minimizing the heat-affected zone (HAZ) and providing better confinement of the material's microstructural changes.
Keywords: wobbled laser beam welding, wobbling function, beam oscillation, micro welding
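A circular wobble is the simplest wobbling function: a fast circular oscillation superimposed on the linear welding feed. The sketch below is purely illustrative; the paper's actual wobble geometries and parameters are not specified here.

```python
import numpy as np

def circular_wobble(t, feed_speed, radius, freq):
    """Beam position for a circular wobble: a fast circular oscillation
    (radius, freq) superimposed on the linear welding feed (feed_speed),
    all in consistent units (mm, s, Hz)."""
    x = feed_speed * t + radius * np.cos(2 * np.pi * freq * t)
    y = radius * np.sin(2 * np.pi * freq * t)
    return x, y

# Example: 10 mm/s feed, 0.2 mm wobble radius, 1 kHz oscillation.
t = np.linspace(0.0, 0.05, 5000)
x, y = circular_wobble(t, feed_speed=10.0, radius=0.2, freq=1000.0)
print(x[-1], y[-1])  # beam advanced ~0.5 mm while tracing ~50 loops
```

The overlap of successive loops is what widens the fusion zone relative to a stationary pulsed spot while keeping the instantaneous heat input, and hence the HAZ, small.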
Procedia PDF Downloads 333
1351 Comparison of Machine Learning-Based Models for Predicting Streptococcus pyogenes Virulence Factors and Antimicrobial Resistance
Authors: Fernanda Bravo Cornejo, Camilo Cerda Sarabia, Belén Díaz Díaz, Diego Santibañez Oyarce, Esteban Gómez Terán, Hugo Osses Prado, Raúl Caulier-Cisterna, Jorge Vergara-Quezada, Ana Moya-Beltrán
Abstract:
Streptococcus pyogenes is a gram-positive bacterium involved in a wide range of diseases and a major human-specific bacterial pathogen. In Chile, the 'Ministerio de Salud' declared an alert this year due to the increase in strains throughout the year. This increase can be attributed to a multitude of factors, including antimicrobial resistance (AMR) and virulence factors (VF). Understanding these VF and AMR is crucial for developing effective strategies and improving public health responses. Moreover, experimental identification and characterization of these pathogenic mechanisms are labor-intensive and time-consuming, so new computational methods are required to provide robust techniques for accelerating this identification. Advances in machine learning (ML) algorithms represent an opportunity to refine and accelerate the discovery of VF associated with Streptococcus pyogenes. In this work, we evaluate the accuracy of various machine learning models in predicting the virulence factors and antimicrobial resistance of Streptococcus pyogenes, with the objective of providing new methods for identifying the pathogenic mechanisms of this organism. Our comprehensive approach involved downloading 32,798 GenBank files of S. pyogenes from the NCBI dataset, coupled with the incorporation of data from the Virulence Factor Database (VFDB) and the Comprehensive Antibiotic Resistance Database (CARD), which contains AMR gene sequences and resistance profiles. These datasets provided labeled examples of both virulent and non-virulent genes, enabling a robust foundation for feature extraction and model training. We employed preprocessing, characterization and feature extraction techniques on primary nucleotide/amino acid sequences and selected the optimal features for model training. The feature set was constructed using sequence-based descriptors (e.g., k-mers and one-hot encoding) and functional annotations based on database prediction. The ML models compared are logistic regression, decision trees, support vector machines, and neural networks, among others. The results of this work show some differences in accuracy between the algorithms; these differences allow us to identify different aspects that represent unique opportunities for more precise and efficient characterization and identification of VF and AMR. This comparative analysis underscores the value of integrating machine learning techniques in predicting S. pyogenes virulence and AMR, offering potential pathways toward more effective diagnostic and therapeutic strategies. Future work will focus on incorporating additional omics data, such as transcriptomics, and exploring advanced deep learning models to further enhance predictive capabilities.
Keywords: antibiotic resistance, Streptococcus pyogenes, virulence factors, machine learning
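The k-mer descriptor named above is straightforward to compute; a minimal sketch follows (the choice of k and the frequency normalization are conventional, not taken from the paper):

```python
from collections import Counter
from itertools import product

def kmer_features(sequence: str, k: int = 3) -> list[float]:
    """Normalized k-mer frequency vector for a nucleotide sequence,
    one of the sequence-based descriptors named in the abstract."""
    sequence = sequence.upper()
    counts = Counter(sequence[i:i + k] for i in range(len(sequence) - k + 1))
    total = max(sum(counts.values()), 1)
    # Fixed ordering over all 4^k possible k-mers keeps vectors comparable
    # across genes of different lengths.
    vocabulary = ["".join(p) for p in product("ACGT", repeat=k)]
    return [counts[kmer] / total for kmer in vocabulary]

vec = kmer_features("ATGCGTACGTTAGC", k=3)
print(len(vec))   # 64 features for k = 3
print(sum(vec))   # ~1.0 (frequencies sum to one)
```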
Procedia PDF Downloads 41
1350 Fusionopolis: The Most Decisive Economic Power Centers of the 21st Century
Authors: Norbert Csizmadia
Abstract:
The main power centers of the 21st century are cities. More than 52% of the world's population lives in cities, in particular in megacities, which have populations over 10 million people, and this share is still growing. According to various research and forecasts, economic concentration will center mainly on 40 megacities and global centers. Various competitiveness analyses and indices outline global city centers and city networks, but if we look at other aspects of urban development, such as complexity, connectivity, creativity, technological development, viability, green cities, pedestrian- and child-friendly cities, creative and cultural centers, cultural spaces, and knowledge centers, we obtain a city competitiveness index with quite new, complex indicators. The research shows this result: in addition to the megacities and the global centers, an investigation of functionality identifies 64 so-called 'fusionopolises' (i.e., fusion-polises), which stand for the most decisive economic power centers of the 21st century. In this competition between cities, Asian centers rise considerably as the world's functional city competitiveness index takes shape.
Keywords: economic geography, human geography, technological development, urbanism
Procedia PDF Downloads 365
1349 Occipital Squama Convexity and Neurocranial Covariation in Extant Homo sapiens
Authors: Miranda E. Karban
Abstract:
A distinctive pattern of occipital squama convexity, known as the occipital bun or chignon, has traditionally been considered a derived Neandertal trait. However, some early modern and extant Homo sapiens share similar occipital bone morphology, showing pronounced internal and external occipital squama curvature and paralambdoidal flattening. It has been posited that these morphological patterns are homologous in the two groups, but this claim remains disputed. Many developmental hypotheses have been proposed, including assertions that the chignon represents a developmental response to a long and narrow cranial vault, a narrow or flexed basicranium, or a prognathic face. These claims, however, remain to be metrically quantified in a large subadult sample, and little is known about the feature's developmental, functional, or evolutionary significance. This study assesses patterns of chignon development and covariation in a comparative sample of extant human growth study cephalograms. Cephalograms from a total of 549 European-derived North American subjects (286 male, 263 female) were scored on a 5-stage ranking system of chignon prominence. Occipital squama shape was found to exist along a continuum, with 34 subjects (6.19%) possessing defined chignons, and 54 subjects (9.84%) possessing very little occipital squama convexity. From this larger sample, those subjects represented by a complete radiographic series were selected for metric analysis. Measurements were collected from lateral and posteroanterior (PA) cephalograms of 26 subjects (16 male, 10 female), each represented at 3 longitudinal age groups. Age group 1 (range: 3.0-6.0 years) includes subjects during a period of rapid brain growth. Age group 2 (range: 8.0-9.5 years) includes subjects during a stage in which brain growth has largely ceased, but cranial and facial development continues. Age group 3 (range: 15.9-20.4 years) includes subjects at their adult stage. A total of 16 landmarks and 153 sliding semi-landmarks were digitized at each age point, and geometric morphometric analyses, including relative warps analysis and two-block partial least squares analysis, were conducted to study covariation patterns between midsagittal occipital bone shape and other aspects of craniofacial morphology. A convex occipital squama was found to covary significantly with a low, elongated neurocranial vault, and this pattern was found to exist from the youngest age group. Other tested patterns of covariation, including cranial and basicranial breadth, basicranial angle, midcoronal cranial vault shape, and facial prognathism, were not found to be significant at any age group. These results suggest that the chignon, at least in this sample, should not be considered an independent feature, but rather the result of developmental interactions relating to neurocranial elongation. While more work must be done to quantify chignon morphology in fossil subadults, this study finds no evidence to disprove the developmental homology of the feature in modern humans and Neandertals.
Keywords: chignon, craniofacial covariation, human cranial development, longitudinal growth study, occipital bun
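Two-block partial least squares, the covariation test used above, finds paired axes of maximal covariance between two landmark blocks. A minimal sketch with synthetic coordinate blocks follows; scikit-learn's PLSCanonical stands in for dedicated morphometric software, and the data are fabricated for illustration.

```python
import numpy as np
from sklearn.cross_decomposition import PLSCanonical

# Hypothetical landmark data: X holds midsagittal occipital coordinates,
# Y the remaining neurocranial landmarks (rows = the 26 subjects), both
# already superimposed/aligned as in a geometric morphometric workflow.
rng = np.random.default_rng(0)
latent = rng.normal(size=(26, 1))                 # shared shape factor
X = latent @ rng.normal(size=(1, 10)) + 0.3 * rng.normal(size=(26, 10))
Y = latent @ rng.normal(size=(1, 14)) + 0.3 * rng.normal(size=(26, 14))

# Two-block PLS: paired latent axes maximizing between-block covariance.
pls = PLSCanonical(n_components=2).fit(X, Y)
x_scores, y_scores = pls.transform(X, Y)
r = np.corrcoef(x_scores[:, 0], y_scores[:, 0])[0, 1]
print(f"correlation along first paired axis: {r:.2f}")
```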
Procedia PDF Downloads 204