Search results for: vector quantization
725 Asymmetric Price Transmission in Rice: A Regional Analysis in Peru
Authors: Renzo Munoz-Najar, Cristina Wong, Daniel De La Torre Ugarte
Abstract:
The literature on price transmission usually deals with asymmetries related to different commodities and/or the short and long term. The role of domestic regional differences and their relationship with asymmetries within a country is usually left out. This paper looks at the asymmetry in the transmission of rice prices from the international price to farm gate prices in four northern regions of Peru over the period 2001-2016. These regions are San Martín, Piura, Lambayeque and La Libertad. The relevance of the study lies in its ability to assess the need for policies aimed at improving the competitiveness of the market and ensuring the benefit of producers. There are differences in planting and harvesting dates, as well as in geographic location, that justify the hypothesis of differences in price transmission asymmetries between these regions. Those differences are due to at least three factors: geography, infrastructure development, and distribution systems. To test this, the Threshold Vector Error Correction Model and the Threshold Vector Autoregressive Model are used. Both models capture asymmetric effects in price adjustments. In this way, it is sought to verify that farm prices react more to falls than to increases in international prices due to the high bargaining power of intermediaries. The results suggest that price transmission is significant only for Lambayeque and La Libertad, and the asymmetry in price transmission for these regions is confirmed. San Martín and Piura, in spite of being the main rice-producing regions of Peru, do not present a significant transmission of international prices; a high degree of self-sufficient supply might be at the center of the logic for this result.
An additional finding is that the short-term adjustment with respect to international prices is higher in La Libertad than in Lambayeque, which could be explained by the greater bargaining power of intermediaries in the latter region due to the greater technological development of its mills.
Keywords: asymmetric price transmission, rice prices, price transmission, regional economics
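The asymmetric error-correction idea in this abstract can be illustrated with a minimal sketch. This is not the authors' threshold VECM; it is a single-equation asymmetric error-correction regression estimated by ordinary least squares, with hypothetical price series, that separates farm-price reactions to international price rises and falls:

```python
import numpy as np

def asymmetric_ecm(world, farm):
    """Fit a simple asymmetric error-correction model by OLS.

    Regresses farm-price changes on positive and negative world-price
    changes separately, plus the lagged price gap (error-correction term).
    Returns (beta_pos, beta_neg, adjustment).
    """
    d_world = np.diff(world)
    d_farm = np.diff(farm)
    ect = (farm - world)[:-1]          # lagged disequilibrium
    X = np.column_stack([
        np.maximum(d_world, 0.0),      # reaction to price increases
        np.minimum(d_world, 0.0),      # reaction to price falls
        ect,                           # error-correction term
        np.ones_like(ect),             # intercept
    ])
    coef, *_ = np.linalg.lstsq(X, d_farm, rcond=None)
    return coef[0], coef[1], coef[2]
```

Asymmetry of the kind the paper tests for would show up as the coefficient on falls exceeding the coefficient on rises.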
Procedia PDF Downloads 231
724 Loss Function Optimization for CNN-Based Fingerprint Anti-Spoofing
Authors: Yehjune Heo
Abstract:
As biometric systems become widely deployed, identification systems can easily be attacked with various spoof materials. This paper contributes to finding a reliable and practical anti-spoofing method using Convolutional Neural Networks (CNNs), based on the choice of loss functions and optimizers. The CNNs used in this paper include AlexNet, VGGNet, and ResNet. By using various loss functions, including Cross-Entropy, Center Loss, Cosine Proximity, and Hinge Loss, and various optimizers, including Adam, SGD, RMSProp, Adadelta, Adagrad, and Nadam, we obtained significant performance changes. We find that choosing the correct loss function for each model is crucial, since different loss functions lead to different errors on the same evaluation. By using a subset of the LivDet 2017 database, we validate our approach and compare generalization power. It is important to note that we use a subset of LivDet and that the database is the same across all training and testing for each model. This way, we can compare the performance, in terms of generalization, on unseen data across all the different models. The best CNN (AlexNet) with the appropriate loss function and optimizer results in a performance gain of more than 3% over the other CNN models with the default loss function and optimizer. In addition to the highest generalization performance, this paper also reports accuracy together with parameter counts and mean average error rates, to find the model that consumes the least memory and computation time for training and testing. Although AlexNet has less complexity than other CNN models, it proves to be very efficient. For practical anti-spoofing systems, the deployed version should use a small amount of memory and should run very fast with high anti-spoofing performance.
For our deployed version on smartphones, additional processing steps, such as quantization and pruning algorithms, have been applied in our final model.
Keywords: anti-spoofing, CNN, fingerprint recognition, loss function, optimizer
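The abstract's point that optimizer choice matters can be seen even on a toy problem. The sketch below hand-rolls plain SGD and Adam updates (standard textbook formulas, not the authors' training code) and applies them to a one-dimensional quadratic; on a CNN, the same loop would receive minibatch gradients instead of an analytic one:

```python
import numpy as np

def minimize(grad_fn, x0, optimizer="sgd", lr=0.1, steps=200):
    """Minimize a function from its gradient with plain SGD or Adam."""
    x = np.asarray(x0, dtype=float)
    m = np.zeros_like(x)          # Adam first-moment estimate
    v = np.zeros_like(x)          # Adam second-moment estimate
    b1, b2, eps = 0.9, 0.999, 1e-8
    for t in range(1, steps + 1):
        g = grad_fn(x)
        if optimizer == "adam":
            m = b1 * m + (1 - b1) * g
            v = b2 * v + (1 - b2) * g * g
            mhat = m / (1 - b1 ** t)   # bias correction
            vhat = v / (1 - b2 ** t)
            x = x - lr * mhat / (np.sqrt(vhat) + eps)
        else:                          # plain SGD
            x = x - lr * g
    return x
```

Different optimizers trace very different paths to the same minimum, which is one reason the paper's grid over loss/optimizer pairs produces such varied results.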
Procedia PDF Downloads 137
723 Early Diagnosis of Myocardial Ischemia Based on Support Vector Machine and Gaussian Mixture Model by Using Features of ECG Recordings
Authors: Merve Begum Terzi, Orhan Arikan, Adnan Abaci, Mustafa Candemir
Abstract:
Acute myocardial infarction is a major cause of death in the world. Therefore, its fast and reliable diagnosis is a major clinical need. ECG is the most important diagnostic methodology used to make decisions about the management of cardiovascular diseases. In patients with acute myocardial ischemia, temporary chest pains, together with changes in the ST segment and T wave of the ECG, occur shortly before the start of myocardial infarction. In this study, a technique that detects changes in the ST/T sections of the ECG is developed for the early diagnosis of acute myocardial ischemia. For this purpose, a database of real ECG recordings is constituted, containing records from 75 patients presenting symptoms of chest pain who underwent elective percutaneous coronary intervention (PCI). 12-lead ECGs of the patients were recorded before and during the PCI procedure. Two ECG epochs, the pre-inflation ECG acquired before any catheter insertion and the occlusion ECG acquired during balloon inflation, are analyzed for each patient. By using pre-inflation and occlusion recordings, ECG features that are critical in the detection of acute myocardial ischemia are identified, and the most discriminative features are extracted. A classification technique based on the support vector machine (SVM) approach, operating with linear and radial basis function (RBF) kernels, is developed to detect ischemic events using ST-T derived joint features from the non-ischemic and ischemic states of the patients. The dataset is randomly divided into training and testing sets, and the training set is used to optimize SVM hyperparameters by using the grid-search method and 10-fold cross-validation. SVMs are designed specifically for each patient by tuning the kernel parameters in order to obtain the optimal classification performance.
As a result of applying the developed classification technique to real ECG recordings, it is shown that the proposed technique provides highly reliable detection of anomalies in ECG signals. Furthermore, to develop a detection technique that can be used in the absence of an ECG recording obtained during the healthy stage, the detection of acute myocardial ischemia based on ECG recordings obtained during ischemia is also investigated. For this purpose, a Gaussian mixture model (GMM) is used to represent the joint pdf of the most discriminating ECG features of myocardial ischemia. Then, a Neyman-Pearson type of approach is developed to provide detection of outliers that would correspond to acute myocardial ischemia. The Neyman-Pearson decision strategy is applied by computing the average log-likelihood values of ECG segments and comparing them with a range of threshold values. For different discrimination threshold values and numbers of ECG segments, probability of detection and probability of false alarm values are computed, and the corresponding ROC curves are obtained. The results indicate that an increasing number of ECG segments provides higher performance for GMM-based classification. Moreover, the comparison between the performances of SVM- and GMM-based classification showed that SVM provides higher classification performance over the ECG recordings of a considerable number of patients.
Keywords: ECG classification, Gaussian mixture model, Neyman-Pearson approach, support vector machine
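The Neyman-Pearson step described above, scoring each segment by its average log-likelihood under a reference model and sweeping a threshold to trace the ROC curve, can be sketched as follows. A one-dimensional Gaussian stands in for the fitted GMM and synthetic features stand in for ECG data; this is an illustration of the decision strategy, not the authors' pipeline:

```python
import numpy as np

def gauss_loglik(x, mu, sigma):
    """Per-sample log-likelihood under N(mu, sigma^2)."""
    z = (x - mu) / sigma
    return -0.5 * z ** 2 - np.log(sigma) - 0.5 * np.log(2 * np.pi)

def roc_from_loglik(ll_normal, ll_ischemic, thresholds):
    """Flag a segment as ischemic when its log-likelihood under the
    'healthy' model falls below a threshold. Returns (P_fa, P_d)
    arrays, one point per threshold."""
    p_fa = np.array([(ll_normal < t).mean() for t in thresholds])
    p_d = np.array([(ll_ischemic < t).mean() for t in thresholds])
    return p_fa, p_d
```

Sweeping the threshold trades false alarms against detections, which is exactly what the reported ROC curves summarize.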
Procedia PDF Downloads 162
722 Smartphone-Based Human Activity Recognition by Machine Learning Methods
Authors: Yanting Cao, Kazumitsu Nawata
Abstract:
As smartphones are upgraded, their software and hardware become smarter, so smartphone-based human activity recognition can be made more refined, complex, and detailed. In this context, we analyzed a set of experimental data obtained by observing and measuring 30 volunteers performing six activities of daily living (ADL). Due to the large sample size, and especially the 561-feature vector with time- and frequency-domain variables, cleaning these intractable features and training a proper model becomes extremely challenging. After a series of feature selection and parameter adjustment steps, a well-performing SVM classifier has been trained.
Keywords: smart sensors, human activity recognition, artificial intelligence, SVM
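A crude version of the feature-cleaning step, ranking the 561 features and keeping only the most informative ones, can be sketched with a simple variance filter. This is an illustrative stand-in, not the authors' actual selection procedure:

```python
import numpy as np

def select_by_variance(X, top=50):
    """Keep the `top` features with the largest variance across samples.

    A crude first-pass filter for a wide feature matrix (e.g. the
    561-dimensional smartphone-sensor vectors) before training an SVM.
    """
    variances = np.var(np.asarray(X, float), axis=0)
    keep = np.argsort(variances)[::-1][:top]   # indices of highest-variance columns
    return np.sort(keep)
```

In practice, variance filtering is usually only a first pass; wrapper or embedded selection around the classifier follows.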
Procedia PDF Downloads 144
721 The Biosphere as a Supercomputer Directing and Controlling Evolutionary Processes
Authors: Igor A. Krichtafovitch
Abstract:
Evolutionary processes are not linear. Long periods of quiet and slow development turn into rather rapid emergences of new species and even phyla. During the Cambrian explosion, 22 new phyla were added to the previously existing 3 phyla. Contrary to common belief, natural selection, or survival of the fittest, cannot account for the dominant evolution vector, which is the steady and accelerating advent of more complex and more intelligent living organisms. Neither Darwinism nor alternative concepts, including panspermia and intelligent design, propose a satisfactory solution for these phenomena. The proposed hypothesis offers a logical and plausible explanation of evolutionary processes in general. It is based on two postulates: a) the Biosphere is a single living organism, all parts of which are interconnected, and b) the Biosphere acts as a giant biological supercomputer, storing and processing information in digital and analog forms. Such a supercomputer surpasses all human-made computers by many orders of magnitude. Living organisms are the product of the intelligent creative action of the biosphere supercomputer. Biological evolution is driven by the growing amount of information stored in living organisms and the increasing complexity of the biosphere as a single organism. The main evolutionary vector is not survival of the fittest but accelerated growth of the computational complexity of living organisms. The following postulates may summarize the proposed hypothesis: biological evolution as a natural life origin and development is a reality. Evolution is a coordinated and controlled process. One of evolution's main development vectors is the growing computational complexity of living organisms and the biosphere's intelligence. The intelligent matter which conducts and controls global evolution is a gigantic bio-computer combining all living organisms on Earth. The information acts like software stored in and controlled by the biosphere.
Random mutations trigger this software, as stipulated by Darwinian evolutionary theories, and it is further stimulated by the growing demand for the Biosphere's global memory storage and computational complexity. A greater memory volume requires a greater number of, and more intellectually advanced, organisms for storing and handling it. More intricate organisms require greater computational complexity of the biosphere in order to keep control over the living world. This is an endless recursive endeavor with accelerated evolutionary dynamics. New species emerge when two conditions are met: a) crucial environmental changes occur and/or the global memory storage volume comes to its limit, and b) the biosphere's computational complexity reaches a critical mass capable of producing more advanced creatures. The hypothesis presented here is a naturalistic concept of life creation and evolution. The hypothesis logically resolves many puzzling problems of the current state of evolutionary theory, such as speciation (as a result of GM purposeful design), the evolution development vector (as a need for growing global intelligence), punctuated equilibrium (happening when the two conditions a) and b) above are met), the Cambrian explosion, and mass extinctions (happening when more intelligent species should replace outdated creatures).
Keywords: supercomputer, biological evolution, Darwinism, speciation
Procedia PDF Downloads 166
720 PolyScan: Comprehending Human Polymicrobial Infections for Vector-Borne Disease Diagnostic Purposes
Authors: Kunal Garg, Louise Theusen Hermansan, Kanoktip Puttaraska, Oliver Hendricks, Heidi Pirttinen, Leona Gilbert
Abstract:
The Germ Theory (one infectious determinant is equal to one disease) has unarguably evolved our capability to diagnose and treat infectious diseases over the years. Nevertheless, the advent of technology, climate change, and volatile human behavior have brought about drastic changes in our environment, leading us to question the relevance of the Germ Theory in our day, i.e., will vector-borne disease (VBD) sufferers produce multiple immune responses when tested for multiple microbes? Vector-diseased patients producing multiple immune responses to different microbes would evidently suggest human polymicrobial infections (HPI). Current diagnostic tools are not equipped with the research findings that would aid in diagnosing patients for polymicrobial infections. This shortcoming has caused misdiagnosis at very high rates, consequently diminishing patients' quality of life due to inadequate treatment. Equipped with state-of-the-art scientific knowledge, PolyScan intends to address the pitfalls in current VBD diagnostics. PolyScan is a multiplex and multifunctional enzyme-linked immunosorbent assay (ELISA) platform that can test for numerous VBD microbes and allows simultaneous screening for multiple types of antibodies. To validate PolyScan, Lyme Borreliosis (LB) and spondyloarthritis (SpA) patient groups (n = 54 each) were tested for Borrelia burgdorferi, Borrelia burgdorferi round body (RB), Borrelia afzelii, Borrelia garinii, and Ehrlichia chaffeensis against IgM and IgG antibodies. LB serum samples were obtained from Germany and SpA serum samples from Denmark under relevant ethical approvals. The SpA group represented the chronic LB stage because reactive arthritis (an SpA subtype) in the form of Lyme arthritis is linked to LB. It was hypothesized that patients from both groups would produce multiple immune responses, which would in consequence suggest HPI.
It was also hypothesized that the proportion of multiple immune responses in the SpA patient group would be significantly larger than in the LB patient group across both antibodies. It was observed that 26% of LB patients and 57% of SpA patients produced multiple immune responses, in contrast to 33% of LB patients and 30% of SpA patients that produced solitary immune responses, when tested against IgM. Similarly, 52% of LB patients and an astounding 73% of SpA patients produced multiple immune responses, in contrast to 30% of LB patients and 8% of SpA patients that produced solitary immune responses, when tested against IgG. Interestingly, IgM immune dysfunction was also recorded in both patient groups. Atypically, 6% of the 18% of LB patients unresponsive with the IgG antibody were recorded producing multiple immune responses with the IgM antibody. Similarly, 12% of the 19% of SpA patients unresponsive with the IgG antibody were recorded producing multiple immune responses with the IgM antibody. Thus, the results not only supported the hypotheses but also suggested that IgM may atypically prevail longer than IgG. The PolyScan concept will aid clinicians in detecting early, persistent, late, polymicrobial, and immune dysfunction conditions linked to different VBDs. PolyScan provides a paradigm shift for the VBD diagnostic industry that will drastically shorten patients' time to receive adequate treatment.
Keywords: diagnostics, immune dysfunction, polymicrobial, TICK-TAG
Procedia PDF Downloads 333
719 Prediction of Formation Pressure Using Artificial Intelligence Techniques
Authors: Abdulmalek Ahmed
Abstract:
Formation pressure is a key factor affecting the economics and efficiency of the drilling operation. Knowing the pore pressure and the parameters that affect it will help to reduce the cost of the drilling process. Many empirical models reported in the literature were used to calculate the formation pressure based on different parameters. Some of these models used only drilling parameters to estimate pore pressure; other models predicted the formation pressure based on log data. All of these models required different trends, such as normal or abnormal, to predict the pore pressure. Few researchers have applied artificial intelligence (AI) techniques to predict the formation pressure, and then by only one method or a maximum of two. The objective of this research is to predict the pore pressure based on both drilling parameters and log data, namely weight on bit, rotary speed, rate of penetration, mud weight, bulk density, porosity and delta sonic time. Real field data are used to predict the formation pressure using five different artificial intelligence (AI) methods: artificial neural networks (ANN), radial basis function (RBF), fuzzy logic (FL), support vector machine (SVM) and functional networks (FN). All AI tools were compared with different empirical models. The AI methods estimated the formation pressure with high accuracy (high correlation coefficient and low average absolute percentage error) and outperformed all previous models. The advantage of the new technique is its simplicity: it estimates pore pressure without the need for a trend, whereas other models require one of two different trends (normal or abnormal pressure). Moreover, comparing the AI tools with each other, the results indicate that SVM has the advantage in pore pressure prediction due to its fast processing speed and high performance (a high correlation coefficient of 0.997 and a low average absolute percentage error of 0.14%).
In the end, a new empirical correlation for formation pressure was developed using the ANN method that can estimate pore pressure with high precision (correlation coefficient of 0.998 and average absolute percentage error of 0.17%).
Keywords: artificial intelligence (AI), formation pressure, artificial neural networks (ANN), fuzzy logic (FL), support vector machine (SVM), functional networks (FN), radial basis function (RBF)
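The two evaluation metrics quoted above, the correlation coefficient and the average absolute percentage error, are straightforward to compute; a minimal sketch with hypothetical pressure arrays:

```python
import numpy as np

def correlation_coefficient(actual, predicted):
    """Pearson correlation between measured and predicted pressures."""
    a, p = np.asarray(actual, float), np.asarray(predicted, float)
    return np.corrcoef(a, p)[0, 1]

def aape(actual, predicted):
    """Average absolute percentage error, in percent."""
    a, p = np.asarray(actual, float), np.asarray(predicted, float)
    return 100.0 * np.mean(np.abs((a - p) / a))
```

Values like the paper's 0.997 correlation and 0.14% AAPE would come from exactly these formulas applied to held-out pressure data.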
Procedia PDF Downloads 150
718 Adaptive Process Monitoring for Time-Varying Situations Using Statistical Learning Algorithms
Authors: Seulki Lee, Seoung Bum Kim
Abstract:
Statistical process control (SPC) is a practical and effective method for quality control. The most important and widely used technique in SPC is the control chart. The main goal of a control chart is to detect any assignable changes that affect the quality output. Most conventional control charts, such as Hotelling's T2 chart, are based on the assumption that the quality characteristics follow a multivariate normal distribution. However, in modern, complicated manufacturing systems, appropriate control chart techniques that can efficiently handle nonnormal processes are required. To overcome the shortcomings of conventional control charts for nonnormal processes, several methods have been proposed that combine statistical learning algorithms and multivariate control charts. Statistical learning-based control charts, such as support vector data description (SVDD)-based charts and k-nearest neighbors-based charts, have proven their improved performance in nonnormal situations compared to the T2 chart. Besides the nonnormal property, time-varying operations are also quite common in real manufacturing fields because of various factors such as product and set-point changes, seasonal variations, catalyst degradation, and sensor drifting. However, traditional control charts cannot accommodate future condition changes of the process because they are formulated from the data recorded in the early stage of the process. In the present paper, we propose an SVDD algorithm-based control chart, which is capable of adaptively monitoring time-varying and nonnormal processes. We reformulated the SVDD algorithm into a time-adaptive SVDD algorithm by adding a weighting factor that reflects time-varying situations. Moreover, we defined an updating region for the efficient model-updating structure of the control chart. The proposed control chart simultaneously allows efficient model updates and timely detection of out-of-control signals.
The effectiveness and applicability of the proposed chart were demonstrated through experiments with simulated data and real data from the metal frame process in mobile device manufacturing.
Keywords: multivariate control chart, nonparametric method, support vector data description, time-varying process
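The weighting factor that makes such a chart time-adaptive can be illustrated with exponentially decaying observation weights, so that the monitored statistic tracks recent process behavior. This sketch shows only the weighting idea, not the full time-adaptive SVDD; the decay rate `lam` is a hypothetical tuning parameter:

```python
import numpy as np

def exp_weights(n, lam=0.05):
    """Exponentially decaying weights: recent observations count the most."""
    w = (1 - lam) ** np.arange(n - 1, -1, -1)   # oldest -> smallest weight
    return w / w.sum()                           # normalize to sum to 1

def weighted_center(window, lam=0.05):
    """Time-adaptive center of a moving window of process observations."""
    w = exp_weights(len(window), lam)
    return w @ np.asarray(window, float)
```

In the full method, the same kind of weighting enters the SVDD objective, so the fitted boundary follows the drifting process rather than its early-stage snapshot.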
Procedia PDF Downloads 300
717 Communicative Strategies in Colombian Political Speech: On the Example of the Speeches of Francia Marquez
Authors: Danila Arbuzov
Abstract:
In this article, the author examines the communicative strategies used in Colombian political discourse, taking as an example the speeches of the Vice President of Colombia, Francia Marquez, who took office in 2022 and marked a new development vector for the Colombian nation. The lexical and syntactic means used to achieve the communicative objectives are analyzed. The material presented may be useful for those interested in investigating various aspects of discursive linguistics, particularly political discourse, as well as the implementation of communicative strategies in certain types of discourse.
Keywords: political discourse, communication strategies, Colombian political discourse, Colombia, manipulation
Procedia PDF Downloads 115
716 Improved Computational Efficiency of Machine Learning Algorithm Based on Evaluation Metrics to Control the Spread of Coronavirus in the UK
Authors: Swathi Ganesan, Nalinda Somasiri, Rebecca Jeyavadhanam, Gayathri Karthick
Abstract:
The COVID-19 crisis presents a substantial and critical hazard to worldwide health. Since the occurrence of the disease in late January 2020 in the UK, the number of people confirmed to have acquired the illness has increased tremendously across the country, and the number of individuals affected is undoubtedly considerably high. The purpose of this research is to develop a predictive machine learning archetype that could forecast COVID-19 cases within the UK. This study concentrates on the statistical data collected from 31st January 2020 to 31st March 2021 in the United Kingdom. Information on total COVID cases registered, new cases encountered on a daily basis, total deaths registered, and patient deaths per day due to Coronavirus is collected from the World Health Organisation (WHO). Data preprocessing is carried out to identify any missing values, outliers, or anomalies in the dataset. The data is split in an 8:2 ratio for training and testing purposes to forecast future new COVID cases. Support Vector Machines (SVM), Random Forests, and linear regression algorithms are chosen to study the model performance in the prediction of new COVID-19 cases. The statistical performance of each model in predicting new COVID cases is evaluated using metrics such as the r-squared value and the mean squared error. Random Forest outperformed the other two machine learning algorithms, with a training accuracy of 99.47% and a testing accuracy of 98.26% when n=30. The mean squared error obtained for Random Forest is 4.05e11, which is lower than that of the other predictive models used in this study. From the experimental analysis, the Random Forest algorithm performs more effectively and efficiently in predicting new COVID cases, which could help the health sector take relevant control measures against the spread of the virus.
Keywords: COVID-19, machine learning, supervised learning, unsupervised learning, linear regression, support vector machine, random forest
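The 8:2 split and the r-squared evaluation described above can be sketched as follows. This is illustrative only (the study's data and models are not reproduced); a plain least-squares fit stands in for the regressors compared in the paper:

```python
import numpy as np

def train_test_split_82(X, y, seed=0):
    """Shuffle and split features/targets into an 8:2 train/test ratio."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    cut = int(0.8 * len(y))
    tr, te = idx[:cut], idx[cut:]
    return X[tr], X[te], y[tr], y[te]

def r_squared(y_true, y_pred):
    """Coefficient of determination used as the evaluation metric."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot
```

The mean squared error reported in the abstract is simply `np.mean((y_true - y_pred) ** 2)` on the held-out 20%.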
Procedia PDF Downloads 121
715 Comparing SVM and Naïve Bayes Classifier for Automatic Microaneurysm Detections
Authors: A. Sopharak, B. Uyyanonvara, S. Barman
Abstract:
Diabetic retinopathy is characterized by the development of retinal microaneurysms. The damage can be prevented if the disease is treated in its early stages. In this paper, we compare Support Vector Machine (SVM) and Naïve Bayes (NB) classifiers for automatic microaneurysm detection in images acquired through non-dilated pupils. The Nearest Neighbor classifier is used as a baseline for comparison. Detected microaneurysms are validated against expert ophthalmologists' hand-drawn ground truths. The sensitivity, specificity, precision and accuracy of each method are also compared.
Keywords: diabetic retinopathy, microaneurysm, naive Bayes classifier, SVM classifier
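The four reported screening metrics follow directly from the confusion-matrix counts of detected microaneurysms versus the hand-drawn ground truth; a minimal sketch:

```python
def detection_metrics(tp, fp, tn, fn):
    """Standard screening metrics from confusion-matrix counts:
    true/false positives (tp/fp) and true/false negatives (tn/fn)."""
    return {
        "sensitivity": tp / (tp + fn),              # recall on diseased cases
        "specificity": tn / (tn + fp),              # recall on healthy cases
        "precision": tp / (tp + fp),                # reliability of a detection
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }
```

The same counts would be tallied per classifier (SVM, NB, Nearest Neighbor) to make the comparison the abstract describes.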
Procedia PDF Downloads 329
714 The Impact of Geopolitical Risks and the Oil Price Fluctuations on the Kuwaiti Financial Market
Authors: Layal Mansour
Abstract:
The aim of this paper is to identify whether oil price volatility or geopolitical risks can predict future financial stress periods or economic recessions in Kuwait. We construct the first Financial Stress Index for Kuwait (FSIK), which includes informative vulnerability indicators of the main financial sectors: the banking sector, the equities market, and the foreign exchange market. The study covers the period from 2000 to 2020, so it includes the two most devastating recent world economic crises with oil price fluctuation: the Covid-19 pandemic crisis and the Ukraine-Russia war. All data are taken from the Central Bank of Kuwait, the World Bank, the IMF, DataStream, and the Federal Reserve Bank of St. Louis. The variables are computed as percentage growth rates, then standardized and aggregated into one index using the variance-equal-weights method, the most frequently used in the literature. The graphical FSIK analysis provides detailed information (by date) to policymakers on how internal financial stability depends on internal policy and events such as government elections or resignations. It also shows how decisions by monetary authorities or internal policymakers to relieve personal loans or to increase or decrease the public budget trigger internal financial instability. The empirical analysis under vector autoregression (VAR) models shows the dynamic causal relationship between oil price fluctuations and the Kuwaiti economy, which relies heavily on the oil price. Similarly, using VAR models to assess the impact of global geopolitical risks on Kuwaiti financial stability, the results reveal whether Kuwait is confronted with or sheltered from geopolitical risks. The Financial Stress Index serves as a guide for macroprudential regulators to understand the weaknesses of the overall Kuwaiti financial market and economy, regardless of the Kuwaiti dinar's strength and exchange rate stability.
It helps policymakers predict future stress periods and, thus, devise alternative cushions to confront possible future financial threats.
Keywords: Kuwait, financial stress index, causality test, VAR, oil price, geopolitical risks
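The index construction described above, growth rates standardized and aggregated with the variance-equal-weights method, reduces to a few lines; a sketch with a hypothetical indicator matrix (one column per sector indicator):

```python
import numpy as np

def financial_stress_index(indicators):
    """Aggregate sector indicators into one stress index.

    Each column is the percentage growth rate of one sector indicator.
    Steps: standardize each column (zero mean, unit variance), then
    average across columns with equal weights (variance-equal-weights).
    """
    X = np.asarray(indicators, float)
    Z = (X - X.mean(axis=0)) / X.std(axis=0)   # standardize per indicator
    return Z.mean(axis=1)                       # equal-weight aggregation
```

A common stress episode that hits all sectors at once then shows up as a sharp spike in the aggregated index.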
Procedia PDF Downloads 83
713 Monitoring Systemic Risk in the Hedge Fund Sector
Authors: Frank Hespeler, Giuseppe Loiacono
Abstract:
We propose measures for systemic risk generated through intra-sectorial interdependencies in the hedge fund sector. These measures are based on variations in the average cross-effects of funds showing significant interdependency between their individual returns and the moments of the sector's return distribution. The proposed measures display a high ability to identify periods of financial distress, are robust to modifications in the underlying econometric model, and are consistent with an intuitive interpretation of the results.
Keywords: hedge funds, systemic risk, vector autoregressive model, risk monitoring
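The measures are built on a vector autoregressive model of fund returns; estimating the basic VAR(1) building block by least squares can be sketched as follows (the systemic-risk measures themselves are not reproduced here):

```python
import numpy as np

def fit_var1(Y):
    """Estimate a VAR(1) model Y_t = A Y_{t-1} + c by least squares.

    Y: (T, k) array of k return series (one column per fund/sector).
    Returns the (k, k) coefficient matrix A and intercept vector c.
    """
    Y = np.asarray(Y, float)
    X = np.column_stack([Y[:-1], np.ones(len(Y) - 1)])  # lagged values + intercept
    B, *_ = np.linalg.lstsq(X, Y[1:], rcond=None)
    return B[:-1].T, B[-1]
```

The off-diagonal entries of A are the cross-effects between funds that the proposed measures track over time.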
Procedia PDF Downloads 326
712 Molecular Characterisation and Expression of Glutathione S-Transferase of Fasciola Gigantica
Authors: J. Adeppa, S. Samanta, O. K. Raina
Abstract:
Fasciolosis is a widespread, economically important parasitic infection caused throughout the world by Fasciola hepatica and F. gigantica. In order to identify novel immunogens conferring significant protection against fasciolosis, research has currently focused on defined antigens, viz. glutathione S-transferase (GST), fatty acid binding protein, cathepsin-L, fluke hemoglobin, paramyosin, myosin and the F. hepatica Kunitz-type molecule. Among these antigens, GST, which plays a crucial role in detoxification processes, i.e. the phase II defense mechanism of this parasite, has a unique position as a novel vaccine candidate and a drug target in the control of this disease. For producing antigens in large quantities and purifying them to complete homogeneity, recombinant DNA technology has become an important tool. RT-PCR was carried out using F. gigantica total RNA as template, and an amplicon of the 657 bp GST gene was obtained. A TA cloning vector was used for cloning of this gene, and the presence of the insert was confirmed by blue-white selection of recombinant colonies. Sequence analysis of the present isolate showed 99.1% sequence homology with the published sequence of the F. gigantica GST gene of cattle origin (accession no. AF112657), with six nucleotide changes at positions 72, 74, 423, 513, 549 and 627 bp, causing an overall change of 4 amino acids. The 657 bp GST gene was cloned at the BamH1 and HindIII restriction sites of the prokaryotic expression vector pPROEXHTb, in frame with six histidine residues, and expressed in E. coli DH5α. Recombinant protein was purified from the bacterial lysate under non-denaturing conditions by sonication after lysozyme treatment and subjecting the soluble fraction of the bacterial lysate to Ni-NTA affinity chromatography. Western blotting with rabbit hyper-immune serum showed immunoreactivity with the 25 kDa recombinant GST. The recombinant protein detected F. gigantica experimental as well as field infection in buffaloes by dot-ELISA. However, cross-reactivity studies on the Fasciola gigantica GST antigen are needed to evaluate the utility of this protein in the serodiagnosis of fasciolosis.
Keywords: Fasciola gigantica, Fasciola hepatica, GST, RT-PCR
Procedia PDF Downloads 187
711 Impact of Urbanization Growth on Disease Spread and Outbreak Response: Exploring Strategies for Enhancing Resilience
Authors: Raquel Vianna Duarte Cardoso, Eduarda Lobato Faria, José Jorge Boueri
Abstract:
Rapid urbanization has transformed the global landscape, presenting significant challenges to public health. This article delves into the impact of urbanization on the spread of infectious diseases in cities and identifies crucial strategies to enhance urban community resilience. Massive urbanization over recent decades has created conducive environments for the rapid spread of diseases due to population density, mobility, and unequal living conditions. Urbanization has been observed to increase exposure to pathogens and foster conditions conducive to disease outbreaks, including seasonal flu, vector-borne diseases, and respiratory infections. In order to tackle these issues, a range of cross-disciplinary approaches are suggested. These encompass the enhancement of urban healthcare infrastructure, emphasizing the need for robust investments in hospitals, clinics, and healthcare systems to keep pace with the burgeoning healthcare requirements in urban environments. Moreover, the establishment of disease monitoring and surveillance mechanisms is indispensable, as it allows for the timely detection of outbreaks, enabling swift responses. Additionally, community engagement and education play a pivotal role in advocating for personal hygiene, vaccination, and preventive measures, thus playing a pivotal role in diminishing disease transmission. Lastly, the promotion of sustainable urban planning, which includes the creation of cities with green spaces, access to clean water, and proper sanitation, can significantly mitigate the risks associated with waterborne and vector-borne diseases. The article is based on a review of scientific literature, and it offers a comprehensive insight into the complexities of the relationship between urbanization and health. 
It places a strong emphasis on the urgent need for integrated approaches to improve urban resilience in the face of health challenges. Keywords: infectious diseases dissemination, public health, urbanization impacts, urban resilience
Procedia PDF Downloads 78710 Review of Different Machine Learning Algorithms
Authors: Syed Romat Ali Shah, Bilal Shoaib, Saleem Akhtar, Munib Ahmad, Shahan Sadiqui
Abstract:
Classification is a data mining technique based on Machine Learning (ML) algorithms. It is used to classify individual items in a body of information into a set of predefined modules or groups. Web mining is also a branch of this family of data mining methods. The main purpose of this paper is to analyze and compare the performance of the Naïve Bayes algorithm, Decision Tree, K-Nearest Neighbor (KNN), Artificial Neural Network (ANN), and Support Vector Machine (SVM). The paper covers each ML algorithm with its advantages and disadvantages and also defines open research issues. Keywords: data mining, web mining, classification, ML algorithms
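As a concrete illustration of one of the compared algorithms, the following is a minimal K-Nearest Neighbor classifier written from scratch; the toy dataset, labels, and Euclidean distance metric are illustrative assumptions, not taken from the paper.

```python
from collections import Counter

def knn_predict(train, labels, x, k=3):
    """Classify x by majority vote among its k nearest training points
    (squared Euclidean distance), the core of the KNN algorithm."""
    nearest = sorted(range(len(train)),
                     key=lambda i: sum((a - b) ** 2 for a, b in zip(train[i], x)))
    votes = Counter(labels[i] for i in nearest[:k])
    return votes.most_common(1)[0][0]

# Toy 2-D dataset: class "A" clusters near (0, 0), class "B" near (5, 5).
X = [(0, 0), (1, 0), (0, 1), (5, 5), (6, 5), (5, 6)]
y = ["A", "A", "A", "B", "B", "B"]

label_a = knn_predict(X, y, (0.5, 0.5))  # query near the "A" cluster
label_b = knn_predict(X, y, (5.5, 5.5))  # query near the "B" cluster
```

The same fit-free prediction loop is what makes KNN simple but expensive at query time, one of the trade-offs such comparisons typically surface.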
Procedia PDF Downloads 303709 Tetrad Field and Torsion Vectors in Schwarzschild Solution
Authors: M. A. Bakry, Aryn T. Shafeek
Abstract:
In this article, absolute parallelism geometry is used to study the torsional gravitational field. The tetrad fields, torsion vector, and torsion scalar of Schwarzschild space are derived. The new solution of the torsional gravitational field is a generalization of the Schwarzschild solution in the context of general relativity. The results are applied to planetary orbits. Keywords: absolute parallelism geometry, tetrad fields, torsion vectors, torsion scalar
Procedia PDF Downloads 143708 Distribution of Malaria-Infected Anopheles Mosquitoes in Kudat, Ranau and Tenom of Sabah, Malaysia
Authors: Ahmad Fakhriy Hassan, Rohani Ahmad, Zurainee Mohamed Nor, Wan Najdah Wan Mohamad Ali
Abstract:
In Malaysia, it was realized that while the incidence of human malaria is decreasing, the incidence of Plasmodium knowlesi malaria appears to be on the rise, especially in rural areas of Sabah, East Malaysia. The primary vector for P. knowlesi malaria in Sabah is An. balabacensis, a species found abundantly in rural areas and shown to rest and feed outdoors throughout the night, which makes its control very challenging. This study aims to examine the distribution of malaria-infected Anopheles mosquitoes in three areas in Sabah, namely Kudat, Ranau, and Tenom, known to present a high number of malaria cases. Briefly, mosquitoes were caught every 6 weeks over a period of 18 months, from May 2016 to November 2017, using the Human Landing Catching (HLC) technique. Identification of species was done using microscopy and molecular methods. The molecular method was also used to detect malaria parasites in all mosquitoes collected. An. balabacensis was present in all the study areas. In Kudat, six other Anopheles species were also detected, namely An. barumbrosus, An. latens, An. letifer, An. maculatus, An. sundaicus, and An. tesselatus. In Ranau, five other Anopheles species were detected, namely An. barumbrosus, An. donaldi, An. hodgkini, An. maculatus, and An. tesselatus, while in Tenom seven more species, An. donaldi, An. umbrosus, An. barumbrosus, An. latens, An. hodgkini, An. maculatus, and An. tesselatus, were detected. This study showed that 24% out of 259, 39% out of 127, and 26% out of 265 Anopheles mosquitoes collected in Kudat, Ranau, and Tenom, respectively, were detected positive for malaria parasites. In Kudat, An. balabacensis, An. barumbrosus, An. latens, An. maculatus, An. sundaicus, and An. tesselatus were the six out of eight Anopheles species found infected with malaria parasites. All Anopheles species collected in Ranau were positive for malaria, while in Tenom only five out of eight species, An. balabacensis, An. donaldi, An. hodgkini, An. 
maculatus, and An. latens, were detected positive for malaria parasites. Interestingly, for all study areas An. balabacensis was shown to be the only species infected with four malaria species: P. falciparum, P. knowlesi, P. vivax, and Plasmodium sp. This finding clearly indicates that An. balabacensis is the dominant malaria vector in Kudat, Ranau, and Tenom. Keywords: Anopheles balabacensis, human landing catching technique, nested PCR, Plasmodium knowlesi, Simian malaria
Procedia PDF Downloads 147707 Transformation of ectA Gene From Halomonas elongata in Tomato Plant
Authors: Narayan Moger, Divya B., Preethi Jambagi, Krishnaveni C. K., Apsana M. R., B. R. Patil, Basvaraj Bagewadi
Abstract:
Salinity is one of the major threats to world food security. Considering the requirement for salt-tolerant crop plants, the present study was undertaken to clone the salt-tolerance gene ectA from a marine organism and transfer it into an agricultural crop to impart salinity tolerance. Ectoine is a compatible solute that accumulates in the cell and is known to be involved in salt tolerance in most halophiles. The present situation calls for the development of salt-tolerant transgenic lines to combat abiotic stress. Against this background, the investigation was conducted to develop transgenic tomato lines by cloning and transferring the ectA gene, whose product catalyzes the formation of acetyl-diaminobutyric acid, an intermediate in ectoine biosynthesis. The ectA gene is involved in maintaining the osmotic balance of plants. The PCR-amplified ectA gene (579 bp) was cloned into the T/A cloning vector (pTZ57R/T). The construct pDBJ26 containing the ectA gene was sequenced using gene-specific forward and reverse primers. The sequence was analyzed using the BLAST algorithm to check the similarity of the ectA gene with other isolates. The highest homology, 99.66 per cent, was found with ectA gene sequences of Halomonas elongata isolates available in the NCBI database. The ectA gene was further subcloned into the pRI101-AN plant expression vector and transferred into E. coli DH5α for maintenance. Further, pDNM27 was mobilized into A. tumefaciens LBA4404 through a tri-parental mating system. The recombinant Agrobacterium containing pDNM27 was transferred into tomato plants through the in planta transformation method. Out of 300 co-cultivated seedlings, only twenty-seven plants became well established under greenhouse conditions. Among the twenty-seven transformants, only twelve plants showed amplification with gene-specific primers. 
Further work must be extended to evaluate the transformants at the T1 and T2 generations for ectoine accumulation, salinity tolerance, plant growth and development, and yield. Keywords: salinity, compatible solutes, ectA, transgenic, in planta transformation
Procedia PDF Downloads 81706 Relevant LMA Features for Human Motion Recognition
Authors: Insaf Ajili, Malik Mallem, Jean-Yves Didier
Abstract:
Motion recognition from videos is a very complex task due to the high variability of motions. This paper describes the challenges of human motion recognition, especially the motion representation step with relevant features. Our descriptor vector is inspired by the Laban Movement Analysis method. We select discriminative features using the Random Forest algorithm in order to remove redundant features and make learning algorithms operate faster and more effectively. We validate our method on the MSRC-12 and UTKinect datasets. Keywords: discriminative LMA features, features reduction, human motion recognition, random forest
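The feature-selection idea can be sketched model-agnostically: shuffle one feature column and measure how much accuracy drops. This is a simplified permutation-importance stand-in for the Random Forest ranking the abstract describes; the dataset and threshold model below are hypothetical.

```python
import random

def accuracy(model, X, y):
    """Fraction of rows the model labels correctly."""
    return sum(model(row) == label for row, label in zip(X, y)) / len(y)

def permutation_importance(model, X, y, feature, trials=20, seed=0):
    """Mean accuracy drop when one feature column is shuffled:
    a model-agnostic proxy for feature relevance."""
    rng = random.Random(seed)
    base = accuracy(model, X, y)
    drops = []
    for _ in range(trials):
        col = [row[feature] for row in X]
        rng.shuffle(col)
        Xp = [row[:feature] + (v,) + row[feature + 1:] for row, v in zip(X, col)]
        drops.append(base - accuracy(model, Xp, y))
    return sum(drops) / trials

# Hypothetical data: feature 0 carries the class signal, feature 1 is noise.
rng = random.Random(1)
X = [(i % 2, rng.random()) for i in range(100)]
y = [row[0] for row in X]
model = lambda row: 1 if row[0] > 0.5 else 0   # uses only feature 0

imp0 = permutation_importance(model, X, y, 0)  # informative feature
imp1 = permutation_importance(model, X, y, 1)  # redundant feature
```

Features whose shuffling barely hurts accuracy (like feature 1 here) are the redundant ones a selection step would remove.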
Procedia PDF Downloads 197705 Arabic Handwriting Recognition Using Local Approach
Authors: Mohammed Arif, Abdessalam Kifouche
Abstract:
Optical character recognition (OCR) plays a major role at the present time. It is capable of solving many serious problems and simplifying human activities. OCR goes back to the 1970s; since then, many solutions have been proposed, but unfortunately most supported little beyond Latin scripts. This work proposes a system for the recognition of off-line Arabic handwriting. The system is based on a structural segmentation method and uses support vector machines (SVM) in the classification phase. We present a state of the art of character segmentation methods, then an overview of the OCR area, and we also address the normalization problems we encountered. After a comparison between Arabic handwritten characters and the segmentation methods, we introduce our contribution through a segmentation algorithm. Keywords: OCR, segmentation, Arabic characters, PAW, post-processing, SVM
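A projection-profile baseline illustrates what the segmentation stage must accomplish; the paper itself uses a structural method, so this sketch (and its toy "image") is only an indicative baseline, not the authors' algorithm.

```python
def segment_columns(img):
    """Split a binary text-line image (list of strings, '#' = ink) into
    connected runs of non-empty columns: a simple vertical
    projection-profile segmentation."""
    ink = [sum(row[c] == '#' for row in img) for c in range(len(img[0]))]
    segments, start = [], None
    for c, v in enumerate(ink + [0]):          # trailing sentinel empty column
        if v and start is None:
            start = c                          # a glyph run begins
        elif not v and start is not None:
            segments.append((start, c - 1))    # the run ends
            start = None
    return segments

# Toy "image": two glyph blobs separated by blank columns.
img = ["..##..##.",
       "..##..##.",
       "........."]
spans = segment_columns(img)
```

Projection profiles fail on the ligatures and overlapping strokes of Arabic script, which is exactly why structural methods such as the one in the paper are needed.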
Procedia PDF Downloads 74704 A Calibration Device for Force-Torque Sensors
Authors: Nicolay Zarutskiy, Roman Bulkin
Abstract:
The paper deals with the existing methods of calibrating force-torque sensors with from one to six components, analyzes their advantages and disadvantages, and motivates the introduction of a new calibration method. The calibration method and its constructive realization are also described here. The calibration method allows performing automated force-torque sensor calibration both with selected components of the main vector of forces and moments and under complex loading. Thus, two main advantages of the proposed calibration method are achieved: automation of the calibration process and universality. Keywords: automation, calibration, calibration device, calibration method, force-torque sensors
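The per-component step of such a calibration can be sketched as an ordinary least-squares fit of sensor reading against known applied load; the gain and offset values below are hypothetical, and a real six-component device would fit a full 6x6 calibration matrix instead.

```python
def fit_linear_calibration(applied, readings):
    """Least-squares fit of reading = gain * load + offset for one sensor
    channel, from known applied loads."""
    n = len(applied)
    mx = sum(applied) / n
    my = sum(readings) / n
    sxx = sum((x - mx) ** 2 for x in applied)
    sxy = sum((x - mx) * (y - my) for x, y in zip(applied, readings))
    gain = sxy / sxx
    offset = my - gain * mx
    return gain, offset

# Hypothetical channel: true gain 2.0 mV/N, offset 0.5 mV, noiseless loads.
loads = [0.0, 1.0, 2.0, 3.0, 4.0]
raw = [2.0 * f + 0.5 for f in loads]
gain, offset = fit_linear_calibration(loads, raw)
# After calibration, force = (reading - offset) / gain.
```

Automating exactly this loop over all selected components and combined loadings is what delivers the two advantages the abstract claims.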
Procedia PDF Downloads 647703 Cascade Multilevel Inverter-Based Grid-Tie Single-Phase and Three-Phase Photovoltaic Power System Controlling and Modeling
Authors: Syed Masood Hussain
Abstract:
An effective control method, including system-level control and pulse width modulation, for the quasi-Z-source cascade multilevel inverter (qZS-CMI) based grid-tie photovoltaic (PV) power system is proposed. The system-level control achieves the grid-tie current injection, independent maximum power point tracking (MPPT) for separate PV panels, and dc-link voltage balance for all quasi-Z-source H-bridge inverter (qZS-HBI) modules. A recent upsurge in the study of photovoltaic (PV) power generation has emerged, since PV systems directly convert solar radiation into electric power without hampering the environment. However, the stochastic fluctuation of solar power is inconsistent with the desired stable power injected into the grid, owing to variations of solar irradiation and temperature. To fully exploit the solar energy, extracting the PV panels' maximum power and feeding it into grids at unity power factor becomes most important. Contributions to this end have been made by the cascade multilevel inverter (CMI). Nevertheless, the H-bridge inverter (HBI) module lacks a boost function, so the inverter kVA rating requirement has to be doubled for a PV voltage range of 1:2, and the different PV panel output voltages result in imbalanced dc-link voltages. If instead each HBI module is built as a two-stage inverter, the many extra dc–dc converters not only increase the complexity of the power circuit and control and the system cost, but also decrease the efficiency. Recently, the Z-source/quasi-Z-source cascade multilevel inverter (ZS/qZS-CMI)-based PV systems were proposed. They possess the advantages of both traditional CMI and Z-source topologies. In order to properly operate the ZS/qZS-CMI, the power injection, independent control of dc-link voltages, and the pulse width modulation (PWM) are necessary. 
The main contributions of this paper include: 1) a novel multilevel space vector modulation (SVM) technique for the single-phase qZS-CMI, implemented without additional resources; 2) a grid-connected control for the qZS-CMI based PV system, where all the PV panel voltage references from their independent MPPTs are used to control the grid-tie current, together with a dual-loop dc-link peak voltage control. Keywords: quasi-Z-source inverter, photovoltaic power system, space vector modulation, cascade multilevel inverter
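The independent MPPT the system-level control performs can be sketched with the common perturb-and-observe loop; the abstract does not specify which MPPT algorithm is used, so P&O and the toy power curve below are assumptions.

```python
def mppt_po(pv_power, v0=20.0, step=0.5, iters=60):
    """Perturb-and-observe MPPT: nudge the panel voltage, keep the
    direction while power rises, reverse it when power falls."""
    v, p_prev, direction = v0, pv_power(v0), 1.0
    for _ in range(iters):
        v += direction * step
        p = pv_power(v)
        if p < p_prev:          # power dropped: reverse the perturbation
            direction = -direction
        p_prev = p
    return v

# Toy PV power curve with its maximum power point at 29 V.
curve = lambda v: max(0.0, 100.0 - (v - 29.0) ** 2)
v_mpp = mppt_po(curve)
```

The loop settles into a small oscillation around the maximum power point, which is why each qZS-HBI module can track its own panel independently of the others.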
Procedia PDF Downloads 548702 Factors of Non-Conformity Behavior and the Emergence of a Ponzi Game in the Riba-Free (Interest-Free) Banking System of Iran
Authors: Amir Hossein Ghaffari Nejad, Forouhar Ferdowsi, Reza Mashhadi
Abstract:
In the interest-free banking system of Iran, the savings of society take the form of bank deposits, and banks, using Islamic contracts, allocate these resources to applicants for facilities and credit. Meanwhile, the central bank, with the aim of conducting monetary policy, determines the maximum interest rate on bank deposits in line with macroeconomic requirements. In recent years, however, the country's economic constraints, together with stagflation and the institutional weaknesses of Iran's financial market, have resulted in massive disturbances in the balance sheet of the banking system, leading to a period of maturity mismatch between the banks' assets and liabilities and the implementation of a Ponzi game. As a consequence, the interest rate in long-term bank deposit contracts came to be set in violation of the maximum rate set by the central bank. The result of this condition was the allocation of new resources to meet past commitments towards old depositors; as a result, a significant part of the supply of funds leaked out of the credit cycle and a credit crunch emerged. The purpose of this study is to identify the most important factors affecting the occurrence of this non-conforming banking behavior, using data from 19 public and private banks of Iran. For this purpose, the causes of the non-conforming behavior of banks have been investigated using the panel vector autoregression (PVAR) method for the period 2007-2015. Granger causality test results suggest that the return of markets parallel to bank deposits, non-performing loans, and the high ratio of facilities to banks' deposits are all causes of the formation of non-conforming behavior. 
Also, according to the results of impulse response functions and variance decomposition, NPL and the ratio of facilities to deposits have the highest long-term effect and also contribute strongly to explaining the changes in banks' non-conforming behavior in determining the interest rate on deposits. Keywords: non-conformity behavior, Ponzi game, panel vector autoregression, non-performing loans
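The lag-1 logic behind a Granger test can be sketched by comparing the residual sum of squares of an autoregression with and without the lagged explanatory series. This is a toy bivariate version (no panel structure, no significance test) of the PVAR Granger tests in the abstract, and the synthetic series are assumptions.

```python
import random

def ols_rss(X, y):
    """Residual sum of squares of OLS y ~ X (no intercept), for X with
    one or two columns, solved via the normal equations."""
    k = len(X[0])
    XtX = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    Xty = [sum(r[i] * t for r, t in zip(X, y)) for i in range(k)]
    if k == 1:
        beta = [Xty[0] / XtX[0][0]]
    else:                                   # 2x2 solve by Cramer's rule
        det = XtX[0][0] * XtX[1][1] - XtX[0][1] * XtX[1][0]
        beta = [(Xty[0] * XtX[1][1] - XtX[0][1] * Xty[1]) / det,
                (XtX[0][0] * Xty[1] - Xty[0] * XtX[1][0]) / det]
    return sum((t - sum(b * v for b, v in zip(beta, r))) ** 2
               for r, t in zip(X, y))

def granger_rss_ratio(x, y):
    """RSS(unrestricted) / RSS(restricted) for y_t ~ y_{t-1} [+ x_{t-1}]:
    a ratio well below 1 means lagged x helps predict y."""
    Xr = [[y[t - 1]] for t in range(1, len(y))]
    Xu = [[y[t - 1], x[t - 1]] for t in range(1, len(y))]
    return ols_rss(Xu, y[1:]) / ols_rss(Xr, y[1:])

# Synthetic example: y is driven by lagged x, but not vice versa.
rng = random.Random(0)
x = [rng.gauss(0, 1) for _ in range(200)]
y = [0.0]
for t in range(1, 200):
    y.append(0.3 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.gauss(0, 1))

ratio_fwd = granger_rss_ratio(x, y)   # x "Granger-causes" y: far below 1
ratio_rev = granger_rss_ratio(y, x)   # no reverse causation: close to 1
```

The actual test wraps this RSS comparison in an F (or chi-squared) statistic; the asymmetry of the two ratios is the qualitative content of a Granger result.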
Procedia PDF Downloads 219701 Source Separation for Global Multispectral Satellite Images Indexing
Authors: Aymen Bouzid, Jihen Ben Smida
Abstract:
In this paper, we demonstrate the importance of applying blind source separation methods to remote sensing data in order to index multispectral images. The proposed method starts with Gabor filtering, followed by the application of a blind source separation to obtain a more effective representation of the information contained in the observed images. After that, a feature vector is extracted from each image in order to index them. Experimental results show the superior performance of this approach. Keywords: blind source separation, content-based image retrieval, feature extraction, multispectral satellite images
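The working principle of ICA-style blind source separation can be sketched for two signals: scan rotation angles and keep the output that is most non-Gaussian (largest |excess kurtosis|). The abstract does not name its BSS algorithm, so this is only an illustration, and it assumes an orthogonal mixing matrix so that the mixtures are already white.

```python
import math, random

def kurt(x):
    """Excess kurtosis: zero for Gaussian data, nonzero for sources."""
    n = len(x)
    m = sum(x) / n
    v = sum((a - m) ** 2 for a in x) / n
    return sum((a - m) ** 4 for a in x) / n / (v * v) - 3.0

def unmix_angle(m1, m2, steps=180):
    """Brute-force scan: the demixing rotation is the angle whose first
    output maximises non-Gaussianity."""
    best, best_score = 0.0, -1.0
    for k in range(steps):
        th = math.pi * k / steps
        s = [math.cos(th) * a + math.sin(th) * b for a, b in zip(m1, m2)]
        if abs(kurt(s)) > best_score:
            best, best_score = th, abs(kurt(s))
    return best

# Hypothetical sources: one binary (kurtosis -2), one Gaussian (kurtosis 0),
# mixed by an unknown rotation of angle phi = 0.6.
rng = random.Random(0)
n = 4000
s1 = [rng.choice([-1.0, 1.0]) for _ in range(n)]
s2 = [rng.gauss(0, 1) for _ in range(n)]
phi = 0.6
m1 = [math.cos(phi) * a - math.sin(phi) * b for a, b in zip(s1, s2)]
m2 = [math.sin(phi) * a + math.cos(phi) * b for a, b in zip(s1, s2)]
th = unmix_angle(m1, m2)   # recovers phi (mod pi)
```

Real BSS algorithms (e.g. FastICA) optimise the same non-Gaussianity criterion with fixed-point iterations instead of a grid scan, and add a whitening step for general mixing matrices.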
Procedia PDF Downloads 403700 Machine Learning Approach for Automating Electronic Component Error Classification and Detection
Authors: Monica Racha, Siva Chandrasekaran, Alex Stojcevski
Abstract:
The engineering programs focus on promoting students' personal and professional development by ensuring that students acquire technical and professional competencies during four-year studies. The traditional engineering laboratory provides an opportunity for students to "practice by doing," and laboratory facilities aid them in obtaining insight and understanding of their discipline. Due to rapid technological advancements and the current COVID-19 outbreak, traditional labs have been transforming into virtual learning environments. Aim: To better understand the limitations of the physical laboratory, this research study aims to use a Machine Learning (ML) algorithm that interfaces with the Augmented Reality HoloLens and predicts the image behavior to classify and detect the electronic components. The automated electronic component error classification and detection system detects and classifies the position of all components on a breadboard by using the ML algorithm. This research will assist first-year undergraduate engineering students in conducting laboratory practices without any supervision. With the help of the HoloLens and the ML algorithm, students will reduce component placement errors on a breadboard and increase the efficiency of simple laboratory practices virtually. Method: The images of breadboards, resistors, capacitors, transistors, and other electrical components will be collected using HoloLens 2 and stored in a database. The collected image dataset will then be used for training a machine learning model. The raw images will be cleaned, processed, and labeled to facilitate further analysis of component error classification and detection. For instance, when students conduct laboratory experiments, the HoloLens captures images of students placing different components on a breadboard. The images are forwarded to the server for detection in the background. 
A hybrid Convolutional Neural Networks (CNNs) and Support Vector Machines (SVMs) algorithm will be used to train the dataset for object recognition and classification. The convolution layer extracts image features, which are then classified using a Support Vector Machine (SVM). By adequately labeling the training data and classifying, the model will predict, categorize, and assess whether students place components correctly. As a result, the data acquired through the HoloLens include images of students assembling electronic components. It constantly checks whether students appropriately position components on the breadboard and connect them so the circuit functions. When students misplace any components, the HoloLens predicts the error before the user places the components in the incorrect position and encourages students to correct their mistakes. This hybrid CNN and SVM approach to automating electronic component error classification and detection eliminates component connection problems and minimizes the risk of component damage. Conclusion: These augmented reality smart glasses powered by machine learning provide a wide range of benefits to supervisors, professionals, and students. They help customize the learning experience, which is particularly beneficial in large classes with limited time. They determine the accuracy with which machine learning algorithms can forecast whether students are making the correct decisions and completing their laboratory tasks. Keywords: augmented reality, machine learning, object recognition, virtual laboratories
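The "convolution layer extracts image features" step of the hybrid model can be sketched with fixed edge kernels; in the real system a trained CNN would learn the kernels and an SVM would classify the resulting feature vectors, so the hand-written kernels and stripe images below are purely illustrative.

```python
def conv2d(img, k):
    """Valid 2-D correlation of an image (list of lists) with a 3x3 kernel."""
    H, W = len(img), len(img[0])
    return [[sum(img[i + di][j + dj] * k[di][dj]
                 for di in range(3) for dj in range(3))
             for j in range(W - 2)] for i in range(H - 2)]

def features(img):
    """Total absolute response to vertical- and horizontal-edge kernels:
    a two-element feature vector a downstream classifier could consume."""
    kv = [[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]]    # vertical edges
    kh = [[-1, -1, -1], [0, 0, 0], [1, 1, 1]]    # horizontal edges
    return [sum(abs(v) for row in conv2d(img, k) for v in row)
            for k in (kv, kh)]

# Toy 8x8 "component images": vertical vs horizontal stripes of width 2.
vert = [[(j // 2) % 2 for j in range(8)] for i in range(8)]
horiz = [[(i // 2) % 2 for j in range(8)] for i in range(8)]
fv = features(vert)    # strong response on the vertical-edge channel
fh = features(horiz)   # strong response on the horizontal-edge channel
```

Because the two classes land in different regions of this feature space, even a linear SVM could separate them, which is the division of labor the hybrid CNN+SVM design relies on.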
Procedia PDF Downloads 137699 Identification of the Parameters of an AC Servomotor Using Genetic Algorithm
Authors: J. G. Batista, K. N. Sousa, J. L. Nunes, R. L. S. Sousa, G. A. P. Thé
Abstract:
This work deals with parameter identification of permanent magnet motors, a class of AC motor which is particularly important in industrial automation due to its high performance. These motors are very attractive for applications with limited space, since their reduced size and volume ease installation, and they can operate in a wide speed range without independent ventilation. By using experimental data and a genetic algorithm, we have been able to extract values for both the motor inductance and the electromechanical coupling constant, which are then compared to measured and/or expected values. Keywords: modeling, AC servomotor, permanent magnet synchronous motor (PMSM), genetic algorithm, vector control, robotic manipulator, control
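The identification loop can be sketched with a minimal real-coded genetic algorithm that searches parameter space to minimize the squared error between model output and measurements. The linear model y = p0*x + p1 below stands in for the motor equations, and all data, bounds, and GA settings are hypothetical.

```python
import random

def identify(data, bounds, pop=40, gens=60, seed=2):
    """Real-coded GA: elitist selection, averaging crossover, and Gaussian
    mutation, minimizing squared model-vs-measurement error."""
    rng = random.Random(seed)
    def model(p, x): return p[0] * x + p[1]          # stand-in for motor model
    def cost(p): return sum((model(p, x) - y) ** 2 for x, y in data)
    popn = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=cost)
        elite = popn[: pop // 4]                      # keep the best quarter
        children = []
        while len(children) < pop - len(elite):
            p1, p2 = rng.sample(elite, 2)
            children.append([(a + b) / 2 + rng.gauss(0, 0.05)
                             for a, b in zip(p1, p2)])
        popn = elite + children
    return min(popn, key=cost)

# Hypothetical noiseless measurements from "true" parameters (1.5, -0.4).
data = [(x / 10, 1.5 * (x / 10) - 0.4) for x in range(20)]
best = identify(data, bounds=[(-5, 5), (-5, 5)])   # approximately (1.5, -0.4)
```

For the actual motor, the same loop would evaluate the electrical and mechanical equations instead of the linear stand-in, with inductance and the coupling constant as the genes.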
Procedia PDF Downloads 472698 Two Strain Dengue Dynamics Incorporating Temporary Cross Immunity with ADE Effect
Authors: Sunita Gakkhar, Arti Mishra
Abstract:
In this paper, a nonlinear host-vector model has been proposed and analyzed for two-strain dengue dynamics incorporating the ADE effect. The model considers that asymptomatic infected people are more responsible for secondary infection than symptomatic ones and differentiates between them. The existence conditions are obtained for the various equilibrium points. The basic reproduction number has been computed and analyzed to explore the effect of the secondary infection enhancement parameter on dengue infection. Stability analyses of the various equilibrium states have been performed. Numerical simulation has been done for the stability of the endemic state. Keywords: dengue, ADE, stability, threshold, asymptomatic, infection
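The threshold role of the basic reproduction number can be illustrated on the SIR skeleton that underlies host-vector models: an outbreak takes off only when R0 = beta/gamma exceeds 1. The paper's two-strain host-vector model is far richer, so this Euler simulation is only a sketch of the threshold behavior, with made-up rates.

```python
def epidemic_peak(beta, gamma, i0=1e-3, dt=0.01, steps=20000):
    """Euler integration of the SIR equations ds/dt = -beta*s*i,
    di/dt = beta*s*i - gamma*i; returns the peak infected fraction."""
    s, i = 1.0 - i0, i0
    peak = i
    for _ in range(steps):
        ds = -beta * s * i
        di = beta * s * i - gamma * i
        s += dt * ds
        i += dt * di
        peak = max(peak, i)
    return peak

above = epidemic_peak(beta=0.4, gamma=0.1)    # R0 = 4:   outbreak
below = epidemic_peak(beta=0.05, gamma=0.1)   # R0 = 0.5: fade-out
```

In the full model, the ADE enhancement parameter effectively raises the transmission term for secondary infections, which is why it shifts this threshold.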
Procedia PDF Downloads 431697 Relation Between Traffic Mix and Traffic Accidents in a Mixed Industrial Urban Area
Authors: Michelle Eliane Hernández-García, Angélica Lozano
Abstract:
Studies of traffic accidents usually contemplate the relation between factors such as the type of vehicle, its operation, and the road infrastructure. Traffic accidents can be explained by different factors, which have greater or lower relevance. Two zones are studied, a mixed industrial zone and the extended zone around it. The first zone has mainly residential (57%) and industrial (23%) land uses. Trucks are mainly on the roads where industries are located. Four sensors give information about traffic and speed on the main roads. The extended zone (which includes the first zone) has mainly residential (47%) and mixed residential (43%) land use, and just 3% industrial use. Its traffic mix is composed mainly of non-trucks, and 39 traffic and speed sensors are located on its main roads. The traffic mix in a mixed land use zone could be related to traffic accidents. To understand this relation, it is required to identify the elements of the traffic mix which are linked to traffic accidents. Models that attempt to explain what factors are related to traffic accidents have faced multiple methodological problems in obtaining robust databases. Poisson regression models are used to explain the accidents. The objective of the Poisson analysis is to estimate a coefficient vector that provides an estimate of the natural logarithm of the mean number of accidents per period; this estimate is achieved by standard maximum likelihood procedures. For the estimation of the relation between traffic accidents and the traffic mix, the database comprises eight variables, with 17,520 observations and six vectors. 
In the model, the dependent variable is the occurrence or non-occurrence of accidents, and the vectors that seek to explain it correspond to the vehicle classes C1, C2, C3, C4, C5, and C6, respectively standing for car, microbus and van, bus, unitary trucks (2 to 6 axles), articulated trucks (3 to 6 axles), and bi-articulated trucks (5 to 9 axles); in addition, there is a vector for the average speed of the traffic mix. A Poisson model is applied, using a logarithmic link function and a Poisson family. For the first zone, the Poisson model shows a positive relation between traffic accidents and C6, average speed, C3, C2, and C1 (in decreasing order). The analysis of the coefficients shows a strong relation with bi-articulated trucks and buses (C6 and C3), indicating an important participation of freight trucks. For the expanded zone, the Poisson model shows a positive relation between traffic accidents and average speed, bi-articulated trucks (C6), and microbuses and vans (C2). The coefficients obtained in both Poisson models show a higher relation between freight trucks and traffic accidents in the first, industrial zone than in the expanded zone. Keywords: freight transport, industrial zone, traffic accidents, traffic mix, trucks
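A Poisson regression with a log link can be sketched via Newton-Raphson on the log-likelihood; here a single predictor stands in for the paper's six class vectors plus average speed, and the counts are synthetic rather than the study's 17,520 observations.

```python
import math

def poisson_fit(x, y, iters=25):
    """Newton-Raphson fit of log E[y] = b0 + b1*x (Poisson family,
    logarithmic link), solving the 2x2 Newton step by Cramer's rule."""
    b0 = math.log(sum(y) / len(y))   # start at the intercept-only MLE
    b1 = 0.0
    for _ in range(iters):
        mu = [math.exp(b0 + b1 * xi) for xi in x]
        g0 = sum(yi - mi for yi, mi in zip(y, mu))               # score
        g1 = sum((yi - mi) * xi for yi, mi, xi in zip(y, mu, x))
        h00 = sum(mu)                                            # Fisher info
        h01 = sum(mi * xi for mi, xi in zip(mu, x))
        h11 = sum(mi * xi * xi for mi, xi in zip(mu, x))
        det = h00 * h11 - h01 * h01
        b0 += (g0 * h11 - h01 * g1) / det
        b1 += (h00 * g1 - g0 * h01) / det
    return b0, b1

# Synthetic counts roughly following log-mean = 0.5 + 0.3 * x.
x = [0, 1, 2, 3, 4, 5, 6, 7]
y = [round(math.exp(0.5 + 0.3 * xi)) for xi in x]
b0, b1 = poisson_fit(x, y)   # recovers approximately (0.5, 0.3)
```

At convergence the score is zero, so the fitted totals match the observed accident totals, one of the standard diagnostic properties of a Poisson GLM with an intercept.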
Procedia PDF Downloads 130696 Detection of Powdery Mildew Disease in Strawberry Using Image Texture and Supervised Classifiers
Authors: Sultan Mahmud, Qamar Zaman, Travis Esau, Young Chang
Abstract:
Strawberry powdery mildew (PM) is a serious disease that has a significant impact on strawberry production. Field scouting is still the major way to find PM disease, which is not only labor intensive but also makes it almost impossible to monitor disease severity. To reduce the loss caused by PM disease and achieve faster automatic detection of the disease, this paper proposes an approach for detection of the disease based on image texture, classified with support vector machines (SVMs) and k-nearest neighbors (kNNs). The methodology of the proposed study is based on image processing and is composed of five main steps: image acquisition, pre-processing, segmentation, feature extraction, and classification. Two strawberry fields were used in this study. Images of healthy leaves and of leaves infected with PM (Sphaerotheca macularis) were taken under artificial cloud lighting conditions. Colour thresholding was utilized to segment all images before textural analysis. The colour co-occurrence matrix (CCM) was introduced for the extraction of textural features. Forty textural features, related to physiological parameters of the leaves, were extracted from the CCMs of National Television System Committee (NTSC) luminance and hue, saturation, and intensity (HSI) images. The normalized feature data were utilized for training and validation, respectively, using the developed classifiers. The classifiers were evaluated with internal, external, and cross-validation. The best classifier was selected based on performance and accuracy. Experimental results suggested that the SVM classifier showed 98.33%, 85.33%, 87.33%, 93.33%, and 95.0% accuracy on internal, external-I, external-II, 4-fold cross, and 5-fold cross-validation, respectively, whereas the kNN results represented 90.0%, 72.00%, 74.66%, 89.33%, and 90.3% classification accuracy, respectively. 
The outcome of this study demonstrated that SVMs classified PM disease with the highest overall accuracy of 91.86% and 1.1211 seconds of processing time. Therefore, the overall results concluded that the proposed approach can significantly support accurate and automatic identification and recognition of strawberry PM disease with the SVM classifier. Keywords: powdery mildew, image processing, textural analysis, color co-occurrence matrix, support vector machines, k-nearest neighbors
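The co-occurrence texture step can be sketched directly: build a gray-level co-occurrence matrix for one pixel offset and derive Haralick-style statistics from it. Only two of the study's forty features are shown, and the tiny 4-level images are made-up examples.

```python
def glcm_features(img, levels=4):
    """Co-occurrence matrix for the horizontal neighbour offset (dx=1),
    normalised to probabilities, plus two texture statistics:
    contrast (local variation) and energy (uniformity)."""
    glcm = [[0] * levels for _ in range(levels)]
    total = 0
    for row in img:
        for a, b in zip(row, row[1:]):   # horizontally adjacent pixel pairs
            glcm[a][b] += 1
            total += 1
    p = [[v / total for v in r] for r in glcm]
    contrast = sum(p[i][j] * (i - j) ** 2
                   for i in range(levels) for j in range(levels))
    energy = sum(v * v for r in p for v in r)
    return contrast, energy

flat = [[1, 1, 1, 1]] * 4        # uniform texture (healthy-leaf stand-in)
stripes = [[0, 3, 0, 3]] * 4     # high-contrast texture (lesion stand-in)
c_flat, e_flat = glcm_features(flat)
c_str, e_str = glcm_features(stripes)
```

Such statistics, computed over several offsets and colour channels, are exactly the kind of feature vector the SVM and kNN classifiers in the study consume.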
Procedia PDF Downloads 122